Update to 4.14.14

Mostly done by Salvatore Bonaccorso.
Ben Hutchings 2018-01-18 05:38:40 +00:00
parent 0bb5e7cccb
commit 6f43038466
11 changed files with 253 additions and 2269 deletions

debian/changelog
View File

@@ -1,9 +1,120 @@
linux (4.14.13-2) UNRELEASED; urgency=medium
linux (4.14.14-1) UNRELEASED; urgency=medium
* RDS: Heap OOB write in rds_message_alloc_sgs() (CVE-2018-5332)
* RDS: null pointer dereference in rds_atomic_free_op (CVE-2018-5333)
* New upstream stable update:
https://www.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.14.14
- dm bufio: fix shrinker scans when (nr_to_scan < retain_target)
- can: gs_usb: fix return value of the "set_bittiming" callback
- IB/srpt: Disable RDMA access by the initiator
- IB/srpt: Fix ACL lookup during login
- [mips*] Validate PR_SET_FP_MODE prctl(2) requests against the ABI of the
task
- [mips*] Factor out NT_PRFPREG regset access helpers
- [mips*] Guard against any partial write attempt with PTRACE_SETREGSET
- [mips*] Consistently handle buffer counter with PTRACE_SETREGSET
- [mips*] Fix an FCSR access API regression with NT_PRFPREG and MSA
- [mips*] Also verify sizeof `elf_fpreg_t' with PTRACE_SETREGSET
- [mips*] Disallow outsized PTRACE_SETREGSET NT_PRFPREG regset accesses
- cgroup: fix css_task_iter crash on CSS_TASK_ITER_PROC
- [x86] kvm: vmx: Scrub hardware GPRs at VM-exit (partial mitigation of
CVE-2017-5715, CVE-2017-5753)
- [x86] platform: wmi: Call acpi_wmi_init() later
- iw_cxgb4: only call the cq comp_handler when the cq is armed
- iw_cxgb4: atomically flush the qp
- iw_cxgb4: only clear the ARMED bit if a notification is needed
- iw_cxgb4: reflect the original WR opcode in drain cqes
- iw_cxgb4: when flushing, complete all wrs in a chain
- [x86] acpi: Handle SCI interrupts above legacy space gracefully
- ALSA: pcm: Remove incorrect snd_BUG_ON() usages
- ALSA: pcm: Workaround for weird PulseAudio behavior on rewind error
- ALSA: pcm: Add missing error checks in OSS emulation plugin builder
- ALSA: pcm: Abort properly at pending signal in OSS read/write loops
- ALSA: pcm: Allow aborting mutex lock at OSS read/write loops
- ALSA: aloop: Release cable upon open error path
- ALSA: aloop: Fix inconsistent format due to incomplete rule
- ALSA: aloop: Fix racy hw constraints adjustment
- [x86] acpi: Reduce code duplication in mp_override_legacy_irq()
- 8021q: fix a memory leak for VLAN 0 device
- ip6_tunnel: disable dst caching if tunnel is dual-stack
- net: core: fix module type in sock_diag_bind
- RDS: Heap OOB write in rds_message_alloc_sgs() (CVE-2018-5332)
- RDS: null pointer dereference in rds_atomic_free_op (CVE-2018-5333)
- net: fec: restore dev_id in the cases of probe error
- net: fec: defer probe if regulator is not ready
- net: fec: free/restore resource in related probe error pathes
- sctp: do not retransmit upon FragNeeded if PMTU discovery is disabled
- sctp: fix the handling of ICMP Frag Needed for too small MTUs
- [arm64, armhf] net: stmmac: enable EEE in MII, GMII or RGMII only
- ipv6: fix possible mem leaks in ipv6_make_skb()
- net/sched: Fix update of lastuse in act modules implementing
stats_update
- ipv6: sr: fix TLVs not being copied using setsockopt
- sfp: fix sfp-bus oops when removing socket/upstream
- membarrier: Disable preemption when calling smp_call_function_many()
- crypto: algapi - fix NULL dereference in crypto_remove_spawns()
- rbd: reacquire lock should update lock owner client id
- rbd: set max_segments to USHRT_MAX
- iwlwifi: pcie: fix DMA memory mapping / unmapping
- [x86] microcode/intel: Extend BDW late-loading with a revision check
- [x86] KVM: Add memory barrier on vmcs field lookup
- [powerpc*] KVM: Book3S PR: Fix WIMG handling under pHyp
- [powerpc*] KVM: Book3S HV: Drop prepare_done from struct kvm_resize_hpt
- [powerpc*] KVM: Book3S HV: Fix use after free in case of multiple resize
requests
- [powerpc*] KVM: Book3S HV: Always flush TLB in kvmppc_alloc_reset_hpt()
- [x86] drm/vmwgfx: Don't cache framebuffer maps
- [x86] drm/vmwgfx: Potential off by one in vmw_view_add()
- [x86] drm/i915/gvt: Clear the shadow page table entry after post-sync
- [x86] drm/i915: Whitelist SLICE_COMMON_ECO_CHICKEN1 on Geminilake.
- [x86] drm/i915: Move init_clock_gating() back to where it was
- [x86] drm/i915: Fix init_clock_gating for resume
- bpf: prevent out-of-bounds speculation (partial mitigation of CVE-2017-5753)
- bpf, array: fix overflow in max_entries and undefined behavior in
index_mask
- bpf: arsh is not supported in 32 bit alu thus reject it
- [arm64, armhf] usb: misc: usb3503: make sure reset is low for at least
100us
- USB: fix usbmon BUG trigger
- USB: UDC core: fix double-free in usb_add_gadget_udc_release
- usbip: remove kernel addresses from usb device and urb debug msgs
- usbip: fix vudc_rx: harden CMD_SUBMIT path to handle malicious input
- usbip: vudc_tx: fix v_send_ret_submit() vulnerability to null xfer
buffer
- staging: android: ashmem: fix a race condition in ASHMEM_SET_SIZE ioctl
(CVE-2017-13216)
- mux: core: fix double get_device()
- kdump: write correct address of mem_section into vmcoreinfo
- apparmor: fix ptrace label match when matching stacked labels
- [x86] pti: Unbreak EFI old_memmap
- [x86] Documentation: Add PTI description
- [x86] cpufeatures: Add X86_BUG_SPECTRE_V[12]
- sysfs/cpu: Add vulnerability folder
- [x86] cpu: Implement CPU vulnerabilites sysfs functions
- [x86] tboot: Unbreak tboot with PTI enabled
- [x86] mm/pti: Remove dead logic in pti_user_pagetable_walk*()
- [x86] cpu/AMD: Make LFENCE a serializing instruction
- [x86] cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC
- [x86] alternatives: Fix optimize_nops() checking
- [x86] pti: Make unpoison of pgd for trusted boot work for real
- [x86] retpoline: Add initial retpoline support (partial mitigation of
CVE-2017-5715)
- [x86] spectre: Add boot time option to select Spectre v2 mitigation
- [x86] retpoline/crypto: Convert crypto assembler indirect jumps
- [x86] retpoline/entry: Convert entry assembler indirect jumps
- [x86] retpoline/ftrace: Convert ftrace assembler indirect jumps
- [x86] retpoline/hyperv: Convert assembler indirect jumps
- [x86] retpoline/xen: Convert Xen hypercall indirect jumps
- [x86] retpoline/checksum32: Convert assembler indirect jumps
- [x86] retpoline/irq32: Convert assembler indirect jumps
- [x86] retpoline: Fill return stack buffer on vmexit
- [x86] pti: Fix !PCID and sanitize defines
- [x86] perf: Disable intel_bts when PTI
[ Salvatore Bonaccorso ]
* loop: fix concurrent lo_open/lo_release (CVE-2018-5344)
[ Ben Hutchings ]
* bpf: Avoid ABI change in 4.14.14
-- Salvatore Bonaccorso <carnil@debian.org> Tue, 16 Jan 2018 20:50:23 +0100
linux (4.14.13-1) unstable; urgency=medium

View File

@@ -1,34 +0,0 @@
From: Mohamed Ghannam <simo.ghannam@gmail.com>
Date: Tue, 2 Jan 2018 19:44:34 +0000
Subject: RDS: Heap OOB write in rds_message_alloc_sgs()
Origin: https://git.kernel.org/linus/c095508770aebf1b9218e77026e48345d719b17c
Bug-Debian-Security: https://security-tracker.debian.org/tracker/CVE-2018-5332
When args->nr_local is 0, nr_pages gets also 0 due some size
calculation via rds_rm_size(), which is later used to allocate
pages for DMA, this bug produces a heap Out-Of-Bound write access
to a specific memory region.
Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
net/rds/rdma.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index bc2f1e0977d6..94729d9da437 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -525,6 +525,9 @@ int rds_rdma_extra_size(struct rds_rdma_args *args)
local_vec = (struct rds_iovec __user *)(unsigned long) args->local_vec_addr;
+ if (args->nr_local == 0)
+ return -EINVAL;
+
/* figure out the number of pages in the vector */
for (i = 0; i < args->nr_local; i++) {
if (copy_from_user(&vec, &local_vec[i],
--
2.15.1
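
The pattern behind the patch above generalises: a request description that can legitimately sum to zero must be rejected before its size reaches an allocator, otherwise later writes land outside the (empty) allocation. Below is a minimal, self-contained userspace sketch of that shape; the struct and function names are invented for illustration and are not the kernel's.

#include <errno.h>
#include <stdio.h>

/* Illustrative description of a user-supplied scatter list. */
struct xfer_args {
	unsigned int nr_segments;	/* number of iovec entries */
	unsigned int seg_bytes;		/* bytes per entry */
};

/* Work out how many pages the request needs.  Refusing nr_segments == 0
 * up front keeps a zero page count from ever reaching the allocator,
 * which is the shape of the CVE-2018-5332 fix. */
static int xfer_extra_size(const struct xfer_args *args, unsigned int *pages)
{
	if (args->nr_segments == 0)
		return -EINVAL;

	*pages = args->nr_segments * ((args->seg_bytes + 4095) / 4096);
	return 0;
}

int main(void)
{
	struct xfer_args bad = { .nr_segments = 0, .seg_bytes = 64 };
	unsigned int pages;

	if (xfer_extra_size(&bad, &pages) == -EINVAL)
		printf("empty request rejected before any allocation\n");
	return 0;
}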

View File

@@ -1,32 +0,0 @@
From: Mohamed Ghannam <simo.ghannam@gmail.com>
Date: Wed, 3 Jan 2018 21:06:06 +0000
Subject: RDS: null pointer dereference in rds_atomic_free_op
Origin: https://git.kernel.org/linus/7d11f77f84b27cef452cee332f4e469503084737
Bug-Debian-Security: https://security-tracker.debian.org/tracker/CVE-2018-5333
set rm->atomic.op_active to 0 when rds_pin_pages() fails
or the user supplied address is invalid,
this prevents a NULL pointer usage in rds_atomic_free_op()
Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
net/rds/rdma.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index 94729d9da437..634cfcb7bba6 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -877,6 +877,7 @@ int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
err:
if (page)
put_page(page);
+ rm->atomic.op_active = 0;
kfree(rm->atomic.op_notifier);
return ret;
--
2.15.1
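
The one-line change above encodes a general error-path rule: when a later, generic teardown keys off a flag, clear that flag as soon as the operation it describes has been abandoned. A hedged sketch of that rule follows; all names are invented for illustration.

#include <stdio.h>
#include <stdlib.h>

struct atomic_op {
	int op_active;		/* generic teardown keys off this flag */
	int *pinned_pages;	/* NULL until pages are actually pinned */
};

/* Generic teardown, run for every op still marked active: it assumes the
 * pinned pages exist, as the real rds_atomic_free_op() does. */
static void atomic_op_free(struct atomic_op *op)
{
	if (!op->op_active)
		return;
	*op->pinned_pages = 0;	/* NULL dereference if the error path
				 * forgot to clear op_active */
	free(op->pinned_pages);
}

/* Error path taken when pinning failed: nothing was pinned, so mark the
 * op inactive before the generic teardown can run.  The CVE-2018-5333
 * fix adds exactly this kind of reset on the failure path. */
static void atomic_op_setup_failed(struct atomic_op *op)
{
	op->op_active = 0;
}

int main(void)
{
	struct atomic_op op = { .op_active = 1, .pinned_pages = NULL };

	atomic_op_setup_failed(&op);	/* pinning failed */
	atomic_op_free(&op);		/* safe: skips the half-built op */
	printf("teardown skipped the inactive op\n");
	return 0;
}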

View File

@@ -1,48 +0,0 @@
From: Ben Seri <ben@armis.com>
Date: Mon, 04 Dec 2017 14:13:25 +0000
Subject: bluetooth: Prevent stack info leak from the EFS element.
Origin: http://www.openwall.com/lists/oss-security/2017/12/06/3
Bug-Debian: https://security-tracker.debian.org/tracker/CVE-2017-1000410
Signed-off-by: Ben Seri <ben@armis.com>
---
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -3363,9 +3363,10 @@ static int l2cap_parse_conf_req(struct l
break;
case L2CAP_CONF_EFS:
- remote_efs = 1;
- if (olen == sizeof(efs))
+ if (olen == sizeof(efs)) {
+ remote_efs = 1;
memcpy(&efs, (void *) val, olen);
+ }
break;
case L2CAP_CONF_EWS:
@@ -3584,16 +3585,17 @@ static int l2cap_parse_conf_rsp(struct l
break;
case L2CAP_CONF_EFS:
- if (olen == sizeof(efs))
+ if (olen == sizeof(efs)) {
memcpy(&efs, (void *)val, olen);
- if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
- efs.stype != L2CAP_SERV_NOTRAFIC &&
- efs.stype != chan->local_stype)
- return -ECONNREFUSED;
+ if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
+ efs.stype != L2CAP_SERV_NOTRAFIC &&
+ efs.stype != chan->local_stype)
+ return -ECONNREFUSED;
- l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
- (unsigned long) &efs, endptr - ptr);
+ l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
+ (unsigned long) &efs, endptr - ptr);
+ }
break;
case L2CAP_CONF_FCS:
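
The reordering above is a standard option-parsing hardening: only treat an option as present once its length has been validated, so a short element can neither be copied past its end nor leave fields uninitialised when later code echoes them back to the peer. A small hedged sketch of that parse shape; the structures here are invented for illustration.

#include <stdio.h>
#include <string.h>

struct efs_option {			/* stand-in for the EFS element */
	unsigned char stype;
	unsigned int sdu_size;
};

/* Accept an option only when its advertised length matches the structure
 * we are about to copy it into; otherwise ignore it entirely, so the
 * "option seen" flag and the copied bytes always agree. */
static int parse_efs(const void *val, size_t olen,
		     struct efs_option *efs, int *remote_efs)
{
	if (olen != sizeof(*efs))
		return -1;		/* wrong size: ignore the element */

	memcpy(efs, val, olen);
	*remote_efs = 1;		/* set only after a full copy */
	return 0;
}

int main(void)
{
	unsigned char short_elem[2] = { 0x01, 0x02 };
	struct efs_option efs = { 0 };
	int remote_efs = 0;

	parse_efs(short_elem, sizeof(short_elem), &efs, &remote_efs);
	printf("remote_efs=%d (short element ignored)\n", remote_efs);
	return 0;
}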

View File

@@ -1,201 +0,0 @@
From: Jakub Kicinski <jakub.kicinski@netronome.com>
Date: Mon, 9 Oct 2017 10:30:10 -0700
Subject: bpf: encapsulate verifier log state into a structure
Origin: https://git.kernel.org/linus/e7bf8249e8f1bac64885eeccb55bcf6111901a81
Put the loose log_* variables into a structure. This will make
it simpler to remove the global verifier state in following patches.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
---
include/linux/bpf_verifier.h | 13 ++++++++++
kernel/bpf/verifier.c | 57 +++++++++++++++++++++++---------------------
2 files changed, 43 insertions(+), 27 deletions(-)
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -115,6 +115,19 @@ struct bpf_insn_aux_data {
#define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
+struct bpf_verifer_log {
+ u32 level;
+ char *kbuf;
+ char __user *ubuf;
+ u32 len_used;
+ u32 len_total;
+};
+
+static inline bool bpf_verifier_log_full(const struct bpf_verifer_log *log)
+{
+ return log->len_used >= log->len_total - 1;
+}
+
struct bpf_verifier_env;
struct bpf_ext_analyzer_ops {
int (*insn_hook)(struct bpf_verifier_env *env,
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -156,8 +156,7 @@ struct bpf_call_arg_meta {
/* verbose verifier prints what it's seeing
* bpf_check() is called under lock, so no race to access these global vars
*/
-static u32 log_level, log_size, log_len;
-static char *log_buf;
+static struct bpf_verifer_log verifier_log;
static DEFINE_MUTEX(bpf_verifier_lock);
@@ -167,13 +166,15 @@ static DEFINE_MUTEX(bpf_verifier_lock);
*/
static __printf(1, 2) void verbose(const char *fmt, ...)
{
+ struct bpf_verifer_log *log = &verifier_log;
va_list args;
- if (log_level == 0 || log_len >= log_size - 1)
+ if (!log->level || bpf_verifier_log_full(log))
return;
va_start(args, fmt);
- log_len += vscnprintf(log_buf + log_len, log_size - log_len, fmt, args);
+ log->len_used += vscnprintf(log->kbuf + log->len_used,
+ log->len_total - log->len_used, fmt, args);
va_end(args);
}
@@ -834,7 +835,7 @@ static int check_map_access(struct bpf_v
* need to try adding each of min_value and max_value to off
* to make sure our theoretical access will be safe.
*/
- if (log_level)
+ if (verifier_log.level)
print_verifier_state(state);
/* The minimum value is only important with signed
* comparisons where we can't assume the floor of a
@@ -2915,7 +2916,7 @@ static int check_cond_jmp_op(struct bpf_
verbose("R%d pointer comparison prohibited\n", insn->dst_reg);
return -EACCES;
}
- if (log_level)
+ if (verifier_log.level)
print_verifier_state(this_branch);
return 0;
}
@@ -3633,7 +3634,7 @@ static int do_check(struct bpf_verifier_
return err;
if (err == 1) {
/* found equivalent state, can prune the search */
- if (log_level) {
+ if (verifier_log.level) {
if (do_print_state)
verbose("\nfrom %d to %d: safe\n",
prev_insn_idx, insn_idx);
@@ -3646,8 +3647,9 @@ static int do_check(struct bpf_verifier_
if (need_resched())
cond_resched();
- if (log_level > 1 || (log_level && do_print_state)) {
- if (log_level > 1)
+ if (verifier_log.level > 1 ||
+ (verifier_log.level && do_print_state)) {
+ if (verifier_log.level > 1)
verbose("%d:", insn_idx);
else
verbose("\nfrom %d to %d:",
@@ -3656,7 +3658,7 @@ static int do_check(struct bpf_verifier_
do_print_state = false;
}
- if (log_level) {
+ if (verifier_log.level) {
verbose("%d: ", insn_idx);
print_bpf_insn(env, insn);
}
@@ -4307,7 +4309,7 @@ static void free_states(struct bpf_verif
int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
{
- char __user *log_ubuf = NULL;
+ struct bpf_verifer_log *log = &verifier_log;
struct bpf_verifier_env *env;
int ret = -EINVAL;
@@ -4332,23 +4334,23 @@ int bpf_check(struct bpf_prog **prog, un
/* user requested verbose verifier output
* and supplied buffer to store the verification trace
*/
- log_level = attr->log_level;
- log_ubuf = (char __user *) (unsigned long) attr->log_buf;
- log_size = attr->log_size;
- log_len = 0;
+ log->level = attr->log_level;
+ log->ubuf = (char __user *) (unsigned long) attr->log_buf;
+ log->len_total = attr->log_size;
+ log->len_used = 0;
ret = -EINVAL;
- /* log_* values have to be sane */
- if (log_size < 128 || log_size > UINT_MAX >> 8 ||
- log_level == 0 || log_ubuf == NULL)
+ /* log attributes have to be sane */
+ if (log->len_total < 128 || log->len_total > UINT_MAX >> 8 ||
+ !log->level || !log->ubuf)
goto err_unlock;
ret = -ENOMEM;
- log_buf = vmalloc(log_size);
- if (!log_buf)
+ log->kbuf = vmalloc(log->len_total);
+ if (!log->kbuf)
goto err_unlock;
} else {
- log_level = 0;
+ log->level = 0;
}
env->strict_alignment = !!(attr->prog_flags & BPF_F_STRICT_ALIGNMENT);
@@ -4385,15 +4387,16 @@ skip_full_check:
if (ret == 0)
ret = fixup_bpf_calls(env);
- if (log_level && log_len >= log_size - 1) {
- BUG_ON(log_len >= log_size);
+ if (log->level && bpf_verifier_log_full(log)) {
+ BUG_ON(log->len_used >= log->len_total);
/* verifier log exceeded user supplied buffer */
ret = -ENOSPC;
/* fall through to return what was recorded */
}
/* copy verifier log back to user space including trailing zero */
- if (log_level && copy_to_user(log_ubuf, log_buf, log_len + 1) != 0) {
+ if (log->level && copy_to_user(log->ubuf, log->kbuf,
+ log->len_used + 1) != 0) {
ret = -EFAULT;
goto free_log_buf;
}
@@ -4420,8 +4423,8 @@ skip_full_check:
}
free_log_buf:
- if (log_level)
- vfree(log_buf);
+ if (log->level)
+ vfree(log->kbuf);
if (!env->prog->aux->used_maps)
/* if we didn't copy map pointers into bpf_prog_info, release
* them now. Otherwise free_bpf_prog_info() will release them.
@@ -4458,7 +4461,7 @@ int bpf_analyzer(struct bpf_prog *prog,
/* grab the mutex to protect few globals used by verifier */
mutex_lock(&bpf_verifier_lock);
- log_level = 0;
+ verifier_log.level = 0;
env->strict_alignment = false;
if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
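
Stripped of the verifier specifics, the refactoring above is: gather the buffer, its capacity and its fill level into one structure, and let a single predicate decide when it is full. A minimal, self-contained userspace sketch of that structure follows; it mirrors the shape the patch introduces but is not the kernel's API.

#include <stdarg.h>
#include <stdio.h>

struct text_log {
	char *buf;
	unsigned int len_total;
	unsigned int len_used;
};

static int log_full(const struct text_log *log)
{
	return log->len_used >= log->len_total - 1;
}

/* Append formatted text, silently stopping once the buffer is full --
 * the same contract as the verifier's verbose() after this patch. */
static void log_printf(struct text_log *log, const char *fmt, ...)
{
	unsigned int room;
	va_list args;
	int n;

	if (log_full(log))
		return;
	room = log->len_total - log->len_used;
	va_start(args, fmt);
	n = vsnprintf(log->buf + log->len_used, room, fmt, args);
	va_end(args);
	if (n < 0)
		n = 0;
	else if ((unsigned int)n >= room)
		n = room - 1;	/* vsnprintf reports the untruncated length */
	log->len_used += n;
}

int main(void)
{
	char buf[32];
	struct text_log log = { .buf = buf, .len_total = sizeof(buf) };

	log_printf(&log, "insn %d: %s\n", 3, "r0 = *(u32 *)(r1 + 0)");
	log_printf(&log, "this line no longer fits in the small buffer\n");
	printf("%u/%u bytes used:\n%s", log.len_used, log.len_total, log.buf);
	return 0;
}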

View File

@@ -1,63 +0,0 @@
From: Alexei Starovoitov <ast@kernel.org>
Date: Mon, 18 Dec 2017 20:12:00 -0800
Subject: [8/9] bpf: fix integer overflows
Origin: https://git.kernel.org/linus/bb7f0f989ca7de1153bd128a40a71709e339fa03
There were various issues related to the limited size of integers used in
the verifier:
- `off + size` overflow in __check_map_access()
- `off + reg->off` overflow in check_mem_access()
- `off + reg->var_off.value` overflow or 32-bit truncation of
`reg->var_off.value` in check_mem_access()
- 32-bit truncation in check_stack_boundary()
Make sure that any integer math cannot overflow by not allowing
pointer math with large values.
Also reduce the scope of "scalar op scalar" tracking.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
[carnil:
- adjust context; we previously changed the verbose() signature
- drop changes to include/linux/bpf_verifier.h that are already present
]
---
include/linux/bpf_verifier.h | 4 ++--
kernel/bpf/verifier.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 50 insertions(+), 2 deletions(-)
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1821,25 +1821,25 @@ static bool check_reg_sane_offset(struct
s64 smin = reg->smin_value;
if (known && (val >= BPF_MAX_VAR_OFF || val <= -BPF_MAX_VAR_OFF)) {
- verbose("math between %s pointer and %lld is not allowed\n",
+ verbose(env, "math between %s pointer and %lld is not allowed\n",
reg_type_str[type], val);
return false;
}
if (reg->off >= BPF_MAX_VAR_OFF || reg->off <= -BPF_MAX_VAR_OFF) {
- verbose("%s pointer offset %d is not allowed\n",
+ verbose(env, "%s pointer offset %d is not allowed\n",
reg_type_str[type], reg->off);
return false;
}
if (smin == S64_MIN) {
- verbose("math between %s pointer and register with unbounded min value is not allowed\n",
+ verbose(env, "math between %s pointer and register with unbounded min value is not allowed\n",
reg_type_str[type]);
return false;
}
if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
- verbose("value %lld makes %s pointer be out of bounds\n",
+ verbose(env, "value %lld makes %s pointer be out of bounds\n",
smin, reg_type_str[type]);
return false;
}
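
The core of check_reg_sane_offset() is an explicit ceiling on every value that may enter pointer arithmetic, so later additions can neither overflow 64 bits nor be silently truncated to 32. A compact, hedged sketch of that kind of guard; the limit below is an arbitrary illustrative stand-in for BPF_MAX_VAR_OFF.

#include <stdint.h>
#include <stdio.h>

#define MAX_VAR_OFF (1LL << 29)		/* illustrative bound */

/* Reject any constant or tracked minimum that could make a subsequent
 * "pointer + offset" computation overflow or wrap. */
static int offset_is_sane(int64_t val)
{
	return val > -MAX_VAR_OFF && val < MAX_VAR_OFF;
}

int main(void)
{
	int64_t offsets[] = { 40, -64, (int64_t)1 << 40, INT64_MIN };
	unsigned int i;

	for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++)
		printf("offset %lld: %s\n", (long long)offsets[i],
		       offset_is_sane(offsets[i]) ? "allowed" : "rejected");
	return 0;
}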

View File

@@ -1,60 +0,0 @@
From: Benjamin Poirier <bpoirier@suse.com>
Date: Mon, 11 Dec 2017 16:26:40 +0900
Subject: e1000e: Fix e1000_check_for_copper_link_ich8lan return value.
Origin: https://marc.info/?l=linux-kernel&m=151297726823919&w=2
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=198047
Bug-Debian: https://bugs.debian.org/885348
e1000e_check_for_copper_link() and e1000_check_for_copper_link_ich8lan()
are the two functions that may be assigned to mac.ops.check_for_link when
phy.media_type == e1000_media_type_copper. Commit 19110cfbb34d ("e1000e:
Separate signaling for link check/link up") changed the meaning of the
return value of check_for_link for copper media but only adjusted the first
function. This patch adjusts the second function likewise.
Reported-by: Christian Hesse <list@eworm.de>
Reported-by: Gabriel C <nix.or.die@gmail.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=198047
Fixes: 19110cfbb34d ("e1000e: Separate signaling for link check/link up")
Tested-by: Christian Hesse <list@eworm.de>
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
---
drivers/net/ethernet/intel/e1000e/ich8lan.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
+++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
@@ -1367,6 +1367,9 @@ out:
* Checks to see of the link status of the hardware has changed. If a
* change in link status has been detected, then we read the PHY registers
* to get the current speed/duplex if link exists.
+ *
+ * Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
+ * up).
**/
static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
{
@@ -1382,7 +1385,7 @@ static s32 e1000_check_for_copper_link_i
* Change or Rx Sequence Error interrupt.
*/
if (!mac->get_link_status)
- return 0;
+ return 1;
/* First we want to see if the MII Status Register reports
* link. If so, then we want to get the current speed/duplex
@@ -1613,10 +1616,12 @@ static s32 e1000_check_for_copper_link_i
* different link partner.
*/
ret_val = e1000e_config_fc_after_link_up(hw);
- if (ret_val)
+ if (ret_val) {
e_dbg("Error configuring flow control\n");
+ return ret_val;
+ }
- return ret_val;
+ return 1;
}
static s32 e1000_get_variants_ich8lan(struct e1000_adapter *adapter)
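
What this backport addresses is purely a return-value contract: after commit 19110cfbb34d, check_for_link must distinguish a negative error code, 0 (link not up) and 1 (link up), and a copper-link helper still returning 0 on success reads as "link down" to its caller. A tiny hedged illustration of consuming such a tri-state return; the function and constant below are invented.

#include <stdio.h>

#define ERR_PHY (-2)	/* stand-in for an -E1000_ERR_* code */

/* Pretend link-check helper using the three-way contract:
 * negative = hardware error, 0 = link down, 1 = link up. */
static int check_for_link(int phy_ok, int link_bit)
{
	if (!phy_ok)
		return ERR_PHY;
	return link_bit ? 1 : 0;
}

int main(void)
{
	int ret = check_for_link(1, 1);

	if (ret < 0)
		printf("link check failed: %d\n", ret);
	else
		printf("link is %s\n", ret ? "up" : "down");
	return 0;
}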

View File

@@ -1,153 +0,0 @@
From: Wanpeng Li <wanpeng.li@hotmail.com>
Date: Thu, 14 Dec 2017 17:40:50 -0800
Subject: KVM: Fix stack-out-of-bounds read in write_mmio
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Origin: https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit?id=e39d200fa5bf5b94a0948db0dae44c1b73b84a56
Bug-Debian-Security: https://security-tracker.debian.org/tracker/CVE-2017-17741
Reported by syzkaller:
BUG: KASAN: stack-out-of-bounds in write_mmio+0x11e/0x270 [kvm]
Read of size 8 at addr ffff8803259df7f8 by task syz-executor/32298
CPU: 6 PID: 32298 Comm: syz-executor Tainted: G OE 4.15.0-rc2+ #18
Hardware name: LENOVO ThinkCentre M8500t-N000/SHARKBAY, BIOS FBKTC1AUS 02/16/2016
Call Trace:
dump_stack+0xab/0xe1
print_address_description+0x6b/0x290
kasan_report+0x28a/0x370
write_mmio+0x11e/0x270 [kvm]
emulator_read_write_onepage+0x311/0x600 [kvm]
emulator_read_write+0xef/0x240 [kvm]
emulator_fix_hypercall+0x105/0x150 [kvm]
em_hypercall+0x2b/0x80 [kvm]
x86_emulate_insn+0x2b1/0x1640 [kvm]
x86_emulate_instruction+0x39a/0xb90 [kvm]
handle_exception+0x1b4/0x4d0 [kvm_intel]
vcpu_enter_guest+0x15a0/0x2640 [kvm]
kvm_arch_vcpu_ioctl_run+0x549/0x7d0 [kvm]
kvm_vcpu_ioctl+0x479/0x880 [kvm]
do_vfs_ioctl+0x142/0x9a0
SyS_ioctl+0x74/0x80
entry_SYSCALL_64_fastpath+0x23/0x9a
The path of patched vmmcall will patch 3 bytes opcode 0F 01 C1(vmcall)
to the guest memory, however, write_mmio tracepoint always prints 8 bytes
through *(u64 *)val since kvm splits the mmio access into 8 bytes. This
leaks 5 bytes from the kernel stack (CVE-2017-17741). This patch fixes
it by just accessing the bytes which we operate on.
Before patch:
syz-executor-5567 [007] .... 51370.561696: kvm_mmio: mmio write len 3 gpa 0x10 val 0x1ffff10077c1010f
After patch:
syz-executor-13416 [002] .... 51302.299573: kvm_mmio: mmio write len 3 gpa 0x10 val 0xc1010f
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/x86.c | 8 ++++----
include/trace/events/kvm.h | 7 +++++--
virt/kvm/arm/mmio.c | 6 +++---
3 files changed, 12 insertions(+), 9 deletions(-)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4362,7 +4362,7 @@ static int vcpu_mmio_read(struct kvm_vcp
addr, n, v))
&& kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, n, v))
break;
- trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, *(u64 *)v);
+ trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, v);
handled += n;
addr += n;
len -= n;
@@ -4621,7 +4621,7 @@ static int read_prepare(struct kvm_vcpu
{
if (vcpu->mmio_read_completed) {
trace_kvm_mmio(KVM_TRACE_MMIO_READ, bytes,
- vcpu->mmio_fragments[0].gpa, *(u64 *)val);
+ vcpu->mmio_fragments[0].gpa, val);
vcpu->mmio_read_completed = 0;
return 1;
}
@@ -4643,14 +4643,14 @@ static int write_emulate(struct kvm_vcpu
static int write_mmio(struct kvm_vcpu *vcpu, gpa_t gpa, int bytes, void *val)
{
- trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, bytes, gpa, *(u64 *)val);
+ trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, bytes, gpa, val);
return vcpu_mmio_write(vcpu, gpa, bytes, val);
}
static int read_exit_mmio(struct kvm_vcpu *vcpu, gpa_t gpa,
void *val, int bytes)
{
- trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, bytes, gpa, 0);
+ trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, bytes, gpa, NULL);
return X86EMUL_IO_NEEDED;
}
--- a/include/trace/events/kvm.h
+++ b/include/trace/events/kvm.h
@@ -211,7 +211,7 @@ TRACE_EVENT(kvm_ack_irq,
{ KVM_TRACE_MMIO_WRITE, "write" }
TRACE_EVENT(kvm_mmio,
- TP_PROTO(int type, int len, u64 gpa, u64 val),
+ TP_PROTO(int type, int len, u64 gpa, void *val),
TP_ARGS(type, len, gpa, val),
TP_STRUCT__entry(
@@ -225,7 +225,10 @@ TRACE_EVENT(kvm_mmio,
__entry->type = type;
__entry->len = len;
__entry->gpa = gpa;
- __entry->val = val;
+ __entry->val = 0;
+ if (val)
+ memcpy(&__entry->val, val,
+ min_t(u32, sizeof(__entry->val), len));
),
TP_printk("mmio %s len %u gpa 0x%llx val 0x%llx",
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -112,7 +112,7 @@ int kvm_handle_mmio_return(struct kvm_vc
}
trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
- data);
+ &data);
data = vcpu_data_host_to_guest(vcpu, data, len);
vcpu_set_reg(vcpu, vcpu->arch.mmio_decode.rt, data);
}
@@ -182,14 +182,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu,
data = vcpu_data_guest_to_host(vcpu, vcpu_get_reg(vcpu, rt),
len);
- trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, data);
+ trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, &data);
kvm_mmio_write_buf(data_buf, len, data);
ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, fault_ipa, len,
data_buf);
} else {
trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, len,
- fault_ipa, 0);
+ fault_ipa, NULL);
ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, fault_ipa, len,
data_buf);
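
The heart of the fix is the trace helper: instead of blindly reading eight bytes through *(u64 *)val, it copies at most len bytes of the access into the logged value and leaves the rest zero, so a 3-byte MMIO write can no longer drag 5 bytes of stack into the trace buffer. A standalone hedged sketch of that copy:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build the value to log for an MMIO access of 'len' bytes: copy only
 * the bytes that were really part of the access, never a whole u64. */
static uint64_t mmio_trace_val(const void *val, unsigned int len)
{
	uint64_t out = 0;

	if (val)
		memcpy(&out, val, len < sizeof(out) ? len : sizeof(out));
	return out;
}

int main(void)
{
	/* A 3-byte vmcall patch (0f 01 c1) sitting next to unrelated bytes,
	 * as in the syzkaller report for CVE-2017-17741. */
	unsigned char buf[8] = { 0x0f, 0x01, 0xc1, 0xde, 0xad, 0xbe, 0xef, 0x77 };

	/* Prints 0xc1010f on a little-endian machine, not the full 8 bytes. */
	printf("logged val = 0x%llx\n",
	       (unsigned long long)mmio_trace_val(buf, 3));
	return 0;
}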

View File

@@ -0,0 +1,138 @@
From: Ben Hutchings <ben@decadent.org.uk>
Date: Thu, 18 Jan 2018 05:17:34 +0000
Subject: bpf: Avoid ABI change in 4.14.14
Forwarded: not-needed
Commit b2157399cc98 "bpf: prevent out-of-bounds speculation" added one
member each to struct bpf_map and struct bpf_array (which is
effectively a sub-type of bpf_map). Changing the size of struct
bpf_array is an ABI change, since the array contents immediately
follows the structure. However, bpf_map::work is not used (or even
initialised) until after the map's refcount drops to zero. We can
therefore move the new members into a union with it.
---
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -51,10 +51,20 @@ struct bpf_map {
u32 pages;
u32 id;
int numa_node;
- bool unpriv_array;
+
struct user_struct *user;
const struct bpf_map_ops *ops;
+#ifdef __GENKSYMS__
struct work_struct work;
+#else
+ union {
+ struct work_struct work;
+ struct {
+ bool unpriv_array;
+ u32 index_mask;
+ };
+ };
+#endif
atomic_t usercnt;
struct bpf_map *inner_map_meta;
};
@@ -196,7 +206,6 @@ struct bpf_prog_aux {
struct bpf_array {
struct bpf_map map;
u32 elem_size;
- u32 index_mask;
/* 'ownership' of prog_array is claimed by the first program that
* is going to use this map or by the first program which FD is stored
* in the map to make sure that all callers and callees have the same
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -104,7 +104,7 @@ static struct bpf_map *array_map_alloc(u
array = bpf_map_area_alloc(array_size, numa_node);
if (!array)
return ERR_PTR(-ENOMEM);
- array->index_mask = index_mask;
+ array->map.index_mask = index_mask;
array->map.unpriv_array = unpriv;
/* copy mandatory map attributes */
@@ -141,7 +141,7 @@ static void *array_map_lookup_elem(struc
if (unlikely(index >= array->map.max_entries))
return NULL;
- return array->value + array->elem_size * (index & array->index_mask);
+ return array->value + array->elem_size * (index & array->map.index_mask);
}
/* emit BPF instructions equivalent to C code of array_map_lookup_elem() */
@@ -158,7 +158,7 @@ static u32 array_map_gen_lookup(struct b
*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
if (map->unpriv_array) {
*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 4);
- *insn++ = BPF_ALU32_IMM(BPF_AND, ret, array->index_mask);
+ *insn++ = BPF_ALU32_IMM(BPF_AND, ret, array->map.index_mask);
} else {
*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 3);
}
@@ -183,7 +183,7 @@ static void *percpu_array_map_lookup_ele
if (unlikely(index >= array->map.max_entries))
return NULL;
- return this_cpu_ptr(array->pptrs[index & array->index_mask]);
+ return this_cpu_ptr(array->pptrs[index & array->map.index_mask]);
}
int bpf_percpu_array_copy(struct bpf_map *map, void *key, void *value)
@@ -203,7 +203,7 @@ int bpf_percpu_array_copy(struct bpf_map
*/
size = round_up(map->value_size, 8);
rcu_read_lock();
- pptr = array->pptrs[index & array->index_mask];
+ pptr = array->pptrs[index & array->map.index_mask];
for_each_possible_cpu(cpu) {
bpf_long_memcpy(value + off, per_cpu_ptr(pptr, cpu), size);
off += size;
@@ -251,11 +251,11 @@ static int array_map_update_elem(struct
return -EEXIST;
if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
- memcpy(this_cpu_ptr(array->pptrs[index & array->index_mask]),
+ memcpy(this_cpu_ptr(array->pptrs[index & array->map.index_mask]),
value, map->value_size);
else
memcpy(array->value +
- array->elem_size * (index & array->index_mask),
+ array->elem_size * (index & array->map.index_mask),
value, map->value_size);
return 0;
}
@@ -289,7 +289,7 @@ int bpf_percpu_array_update(struct bpf_m
*/
size = round_up(map->value_size, 8);
rcu_read_lock();
- pptr = array->pptrs[index & array->index_mask];
+ pptr = array->pptrs[index & array->map.index_mask];
for_each_possible_cpu(cpu) {
bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value + off, size);
off += size;
@@ -651,7 +651,7 @@ static u32 array_of_map_gen_lookup(struc
*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
if (map->unpriv_array) {
*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 6);
- *insn++ = BPF_ALU32_IMM(BPF_AND, ret, array->index_mask);
+ *insn++ = BPF_ALU32_IMM(BPF_AND, ret, array->map.index_mask);
} else {
*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 5);
}
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4344,9 +4344,7 @@ static int fixup_bpf_calls(struct bpf_ve
insn_buf[0] = BPF_JMP_IMM(BPF_JGE, BPF_REG_3,
map_ptr->max_entries, 2);
insn_buf[1] = BPF_ALU32_IMM(BPF_AND, BPF_REG_3,
- container_of(map_ptr,
- struct bpf_array,
- map)->index_mask);
+ map_ptr->index_mask);
insn_buf[2] = *insn;
cnt = 3;
new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
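
The trick described above generalises: when an ABI-frozen structure needs new members, overlay them in a union with an existing member that is provably dead whenever the new members are live, and hide the union from genksyms (__GENKSYMS__) so the symbol version does not change. A minimal userspace sketch of the size-preservation half; the struct names are invented and a plain blob stands in for struct work_struct.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_work_struct { void *data[4]; };	/* opaque stand-in */

/* Original layout whose size is part of the ABI. */
struct map_v1 {
	uint32_t max_entries;
	struct fake_work_struct work;	/* only used during final teardown */
};

/* New layout: the extra fields share storage with 'work', which is dead
 * until the refcount hits zero, so sizeof() is unchanged and any array
 * placed immediately after the struct keeps its offset. */
struct map_v2 {
	uint32_t max_entries;
	union {
		struct fake_work_struct work;
		struct {
			bool unpriv_array;
			uint32_t index_mask;
		};
	};
};

int main(void)
{
	static_assert(sizeof(struct map_v1) == sizeof(struct map_v2),
		      "union overlay must not change the ABI size");
	printf("both layouts are %zu bytes\n", sizeof(struct map_v2));
	return 0;
}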

View File

@@ -81,7 +81,6 @@ bugfix/all/kbuild-include-addtree-remove-quotes-before-matching-path.patch
bugfix/all/i40e-i40evf-organize-and-re-number-feature-flags.patch
bugfix/all/i40e-fix-flags-declaration.patch
bugfix/all/xen-time-do-not-decrease-steal-time-after-live-migra.patch
bugfix/all/e1000e-fix-e1000_check_for_copper_link_ich8lan-return-value.patch
bugfix/all/libsas-Disable-asynchronous-aborts-for-SATA-devices.patch
bugfix/all/drm-nouveau-disp-gf119-add-missing-drive-vfunc-ptr.patch
debian/revert-objtool-fix-config_stack_validation-y-warning.patch
@@ -126,13 +125,6 @@ bugfix/all/netfilter-xt_osf-add-missing-permission-checks.patch
bugfix/all/media-dvb-usb-v2-lmedm04-Improve-logic-checking-of-w.patch
bugfix/all/media-dvb-usb-v2-lmedm04-move-ts2020-attach-to-dm04_.patch
bugfix/all/media-hdpvr-fix-an-error-handling-path-in-hdpvr_prob.patch
bugfix/all/kvm-fix-stack-out-of-bounds-read-in-write_mmio.patch
bugfix/all/bluetooth-prevent-stack-info-leak-from-the-efs-element.patch
bugfix/all/bpf-encapsulate-verifier-log-state-into-a-structure.patch
bugfix/all/bpf-move-global-verifier-log-into-verifier-environme.patch
bugfix/all/bpf-fix-integer-overflows.patch
bugfix/all/RDS-Heap-OOB-write-in-rds_message_alloc_sgs.patch
bugfix/all/RDS-null-pointer-dereference-in-rds_atomic_free_op.patch
bugfix/all/loop-fix-concurrent-lo_open-lo_release.patch
# Fix exported symbol versions
@@ -164,3 +156,4 @@ features/arm/dwmac-sun8i/0008-ARM-dts-sunxi-h3-h5-represent-the-mdio-switch-used
features/arm64/tegra210-smp/0001-arm64-tegra-Add-CPU-and-PSCI-nodes-for-NVIDIA-Tegra2.patch
# ABI maintenance
debian/bpf-avoid-abi-change-in-4.14.14.patch