[x86] Update TAA and NX fixes to pending stable backports

This commit is contained in:
Ben Hutchings 2019-11-09 20:16:45 +00:00
parent be004c1b69
commit c2443a2e97
29 changed files with 233 additions and 225 deletions

debian/changelog

@@ -1,6 +1,8 @@
linux (4.19.67-2+deb10u2) UNRELEASED; urgency=medium
* [x86] Add mitigation for TSX Asynchronous Abort (CVE-2019-11135):
- KVM: x86: use Intel speculation bugs and features as derived in generic
x86 code
- x86/msr: Add the IA32_TSX_CTRL MSR
- x86/cpu: Add a helper function x86_read_arch_cap_msr()
- x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default
@@ -15,7 +17,6 @@ linux (4.19.67-2+deb10u2) UNRELEASED; urgency=medium
Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
* [x86] KVM: Add mitigation for Machine Check Error on Page Size Change
(aka iTLB multi-hit, CVE-2018-12207):
- KVM: x86: adjust kvm_mmu_page member to save 8 bytes
- kvm: Convert kvm_lock to a mutex
- kvm: x86: Do not release the page inside mmu_set_spte()
- KVM: x86: make FNAME(fetch) and __direct_map more similar
@@ -26,6 +27,7 @@ linux (4.19.67-2+deb10u2) UNRELEASED; urgency=medium
- KVM: vmx, svm: always run with EFER.NXE=1 when shadow paging is active
- x86/bugs: Add ITLB_MULTIHIT bug infrastructure
- cpu/speculation: Uninline and export CPU mitigations helpers
- x86/cpu: Add Tremont to the cpu vulnerability whitelist
- kvm: mmu: ITLB_MULTIHIT mitigation
- kvm: Add helper function for creating VM worker threads
- kvm: x86: mmu: Recovery of shattered NX large pages


@@ -1,52 +0,0 @@
From: Wei Yang <richard.weiyang@gmail.com>
Date: Thu, 6 Sep 2018 05:58:16 +0800
Subject: KVM: x86: adjust kvm_mmu_page member to save 8 bytes
commit 3ff519f29d98ecdc1961d825d105d68711093b6b upstream.
On a 64-bit machine, the struct is naturally aligned to 8 bytes. Since the
kvm_mmu_page members *unsync* and *role* are less than 4 bytes each, we can
rearrange the sequence to compact the struct.
As the comment shows, *role* and *gfn* are used to key the shadow page. In
order to keep the comment valid, this patch moves the *unsync* up and
exchange the position of *role* and *gfn*.
From /proc/slabinfo, it shows the size of kvm_mmu_page is 8 bytes less and
with one more object per slab after applying this patch.
# name <active_objs> <num_objs> <objsize> <objperslab>
kvm_mmu_page_header 0 0 168 24
kvm_mmu_page_header 0 0 160 25
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/include/asm/kvm_host.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -281,18 +281,18 @@ struct kvm_rmap_head {
struct kvm_mmu_page {
struct list_head link;
struct hlist_node hash_link;
+ bool unsync;
/*
* The following two entries are used to key the shadow page in the
* hash table.
*/
- gfn_t gfn;
union kvm_mmu_page_role role;
+ gfn_t gfn;
u64 *spt;
/* hold the gfn of each spte inside spt */
gfn_t *gfns;
- bool unsync;
int root_count; /* Currently serving as active root */
unsigned int unsync_children;
struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */


@@ -2,7 +2,7 @@ From: Vineela Tummalapalli <vineela.tummalapalli@intel.com>
Date: Mon, 4 Nov 2019 12:22:01 +0100
Subject: x86/bugs: Add ITLB_MULTIHIT bug infrastructure
commit db4d30fbb71b47e4ecb11c4efa5d8aad4b03dfae upstream.
commit db4d30fbb71b47e4ecb11c4efa5d8aad4b03dfae upstream
Some processors may incur a machine check error possibly resulting in an
unrecoverable CPU lockup when an instruction fetch encounters a TLB
@@ -30,10 +30,6 @@ Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 4.19:
- No support for X86_VENDOR_HYGON, ATOM_AIRMONT_NP
- Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
.../ABI/testing/sysfs-devices-system-cpu | 1 +
arch/x86/include/asm/cpufeatures.h | 1 +
@@ -81,7 +77,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
* Not susceptible to
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1391,6 +1391,11 @@ static ssize_t l1tf_show_state(char *buf
@@ -1387,6 +1387,11 @@ static ssize_t l1tf_show_state(char *buf
}
#endif
@@ -93,7 +89,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
static ssize_t mds_show_state(char *buf)
{
if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
@@ -1494,6 +1499,9 @@ static ssize_t cpu_show_common(struct de
@@ -1490,6 +1495,9 @@ static ssize_t cpu_show_common(struct de
case X86_BUG_TAA:
return tsx_async_abort_show_state(buf);
@@ -103,7 +99,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
default:
break;
}
@@ -1535,4 +1543,9 @@ ssize_t cpu_show_tsx_async_abort(struct
@@ -1531,4 +1539,9 @@ ssize_t cpu_show_tsx_async_abort(struct
{
return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
}


@@ -0,0 +1,30 @@
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Mon, 4 Nov 2019 12:22:01 +0100
Subject: x86/cpu: Add Tremont to the cpu vulnerability whitelist
commit cad14885a8d32c1c0d8eaa7bf5c0152a22b6080e upstream
Add the new cpu family ATOM_TREMONT_D to the cpu vulnerability
whitelist. ATOM_TREMONT_D is not affected by X86_BUG_ITLB_MULTIHIT.
ATOM_TREMONT_D might have mitigations against other issues as well, but
only the ITLB multihit mitigation is confirmed at this point.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kernel/cpu/common.c | 2 ++
1 file changed, 2 insertions(+)
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1000,6 +1000,8 @@ static const __initconst struct x86_cpu_
* good enough for our purposes.
*/
+ VULNWL_INTEL(ATOM_TREMONT_X, NO_ITLB_MULTIHIT),
+
/* AMD Family 0xf - 0x12 */
VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),


@@ -2,7 +2,7 @@ From: Tyler Hicks <tyhicks@canonical.com>
Date: Mon, 4 Nov 2019 12:22:02 +0100
Subject: cpu/speculation: Uninline and export CPU mitigations helpers
commit 731dc9df975a5da21237a18c3384f811a7a41cc6 upstream.
commit 731dc9df975a5da21237a18c3384f811a7a41cc6 upstream
A kernel module may need to check the value of the "mitigations=" kernel
command line parameter as part of its setup when the module needs
@@ -17,7 +17,6 @@ cpu_mitigations can be checked with the exported helper functions.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
include/linux/cpu.h | 25 ++-----------------------
kernel/cpu.c | 27 ++++++++++++++++++++++++++-


@@ -2,7 +2,7 @@ From: "Gomez Iglesias, Antonio" <antonio.gomez.iglesias@intel.com>
Date: Mon, 4 Nov 2019 12:22:03 +0100
Subject: Documentation: Add ITLB_MULTIHIT documentation
commit 7f00cc8d4a51074eb0ad4c3f16c15757b1ddfb7d upstream.
commit 7f00cc8d4a51074eb0ad4c3f16c15757b1ddfb7d upstream
Add the initial ITLB_MULTIHIT documentation.
@@ -12,7 +12,6 @@ Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Nelson D'Souza <nelson.dsouza@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
Documentation/admin-guide/hw-vuln/index.rst | 1 +
.../admin-guide/hw-vuln/multihit.rst | 163 ++++++++++++++++++


@@ -1,8 +1,8 @@
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 30 Sep 2019 18:48:44 +0200
Date: Fri, 11 Oct 2019 11:59:48 +0200
Subject: kvm: x86, powerpc: do not allow clearing largepages debugfs entry
commit 833b45de69a6016c4b0cebe6765d526a31a81580 upstream.
commit 833b45de69a6016c4b0cebe6765d526a31a81580 upstream
The largepages debugfs entry is incremented/decremented as shadow
pages are created or destroyed. Clearing it will result in an
@@ -11,8 +11,8 @@ misinterpreted by tools that use debugfs information), so make
this particular statistic read-only.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 4.19: drop powerpc changes and the Cc to kvm-ppc]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: kvm-ppc@vger.kernel.org
---
arch/x86/kvm/x86.c | 6 +++---
include/linux/kvm_host.h | 2 ++


@@ -2,7 +2,7 @@ From: Junaid Shahid <junaids@google.com>
Date: Thu, 3 Jan 2019 17:14:28 -0800
Subject: kvm: Convert kvm_lock to a mutex
commit 0d9ce162cf46c99628cc5da9510b959c7976735b upstream.
commit 0d9ce162cf46c99628cc5da9510b959c7976735b upstream
It doesn't seem as if there is any particular need for kvm_lock to be a
spinlock, so convert the lock to a mutex so that sleepable functions (in
@@ -10,8 +10,7 @@ particular cond_resched()) can be called while holding it.
Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
Documentation/virtual/kvm/locking.txt | 4 +---
arch/s390/kvm/kvm-s390.c | 4 ++--
@@ -81,7 +80,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6490,7 +6490,7 @@ static void kvm_hyperv_tsc_notifier(void
@@ -6498,7 +6498,7 @@ static void kvm_hyperv_tsc_notifier(void
struct kvm_vcpu *vcpu;
int cpu;
@@ -90,7 +89,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
list_for_each_entry(kvm, &vm_list, vm_list)
kvm_make_mclock_inprogress_request(kvm);
@@ -6516,7 +6516,7 @@ static void kvm_hyperv_tsc_notifier(void
@@ -6524,7 +6524,7 @@ static void kvm_hyperv_tsc_notifier(void
spin_unlock(&ka->pvclock_gtod_sync_lock);
}
@@ -99,7 +98,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
}
#endif
@@ -6574,17 +6574,17 @@ static int kvmclock_cpufreq_notifier(str
@@ -6582,17 +6582,17 @@ static int kvmclock_cpufreq_notifier(str
smp_call_function_single(freq->cpu, tsc_khz_changed, freq, 1);
@@ -120,7 +119,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
if (freq->old < freq->new && send_ipi) {
/*
@@ -6710,12 +6710,12 @@ static void pvclock_gtod_update_fn(struc
@@ -6718,12 +6718,12 @@ static void pvclock_gtod_update_fn(struc
struct kvm_vcpu *vcpu;
int i;
@@ -157,7 +156,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
static DEFINE_RAW_SPINLOCK(kvm_count_lock);
LIST_HEAD(vm_list);
@@ -684,9 +684,9 @@ static struct kvm *kvm_create_vm(unsigne
@@ -685,9 +685,9 @@ static struct kvm *kvm_create_vm(unsigne
if (r)
goto out_err;
@@ -169,7 +168,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
preempt_notifier_inc();
@@ -732,9 +732,9 @@ static void kvm_destroy_vm(struct kvm *k
@@ -733,9 +733,9 @@ static void kvm_destroy_vm(struct kvm *k
kvm_uevent_notify_change(KVM_EVENT_DESTROY_VM, kvm);
kvm_destroy_vm_debugfs(kvm);
kvm_arch_sync_events(kvm);
@@ -181,7 +180,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
kvm_free_irq_routing(kvm);
for (i = 0; i < KVM_NR_BUSES; i++) {
struct kvm_io_bus *bus = kvm_get_bus(kvm, i);
@@ -3828,13 +3828,13 @@ static int vm_stat_get(void *_offset, u6
@@ -3831,13 +3831,13 @@ static int vm_stat_get(void *_offset, u6
u64 tmp_val;
*val = 0;
@@ -197,7 +196,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
return 0;
}
@@ -3847,12 +3847,12 @@ static int vm_stat_clear(void *_offset,
@@ -3850,12 +3850,12 @@ static int vm_stat_clear(void *_offset,
if (val)
return -EINVAL;
@@ -212,7 +211,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
return 0;
}
@@ -3867,13 +3867,13 @@ static int vcpu_stat_get(void *_offset,
@@ -3870,13 +3870,13 @@ static int vcpu_stat_get(void *_offset,
u64 tmp_val;
*val = 0;
@@ -228,7 +227,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
return 0;
}
@@ -3886,12 +3886,12 @@ static int vcpu_stat_clear(void *_offset
@@ -3889,12 +3889,12 @@ static int vcpu_stat_clear(void *_offset
if (val)
return -EINVAL;
@@ -243,7 +242,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
return 0;
}
@@ -3912,7 +3912,7 @@ static void kvm_uevent_notify_change(uns
@@ -3915,7 +3915,7 @@ static void kvm_uevent_notify_change(uns
if (!kvm_dev.this_device || !kvm)
return;
@@ -252,7 +251,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
if (type == KVM_EVENT_CREATE_VM) {
kvm_createvm_count++;
kvm_active_vms++;
@@ -3921,7 +3921,7 @@ static void kvm_uevent_notify_change(uns
@@ -3924,7 +3924,7 @@ static void kvm_uevent_notify_change(uns
}
created = kvm_createvm_count;
active = kvm_active_vms;


@@ -1,8 +1,8 @@
From: Junaid Shahid <junaids@google.com>
Date: Thu, 3 Jan 2019 16:22:21 -0800
Subject: kvm: x86: Do not release the page inside mmu_set_spte()
Subject: kvm: mmu: Do not release the page inside mmu_set_spte()
commit 43fdcda96e2550c6d1c46fb8a78801aa2f7276ed upstream.
commit 43fdcda96e2550c6d1c46fb8a78801aa2f7276ed upstream
Release the page at the call-site where it was originally acquired.
This makes the exit code cleaner for most call sites, since they
@@ -11,7 +11,7 @@ label.
Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/mmu.c | 18 +++++++-----------
arch/x86/kvm/paging_tmpl.h | 8 +++-----


@@ -2,7 +2,7 @@ From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 24 Jun 2019 13:06:21 +0200
Subject: KVM: x86: make FNAME(fetch) and __direct_map more similar
commit 3fcf2d1bdeb6a513523cb2c77012a6b047aa859c upstream.
commit 3fcf2d1bdeb6a513523cb2c77012a6b047aa859c upstream
These two functions are basically doing the same thing through
kvm_mmu_get_page, link_shadow_page and mmu_set_spte; yet, for historical
@@ -11,8 +11,7 @@ best of each and make them very similar, so that it is easy to understand
changes that apply to both of them.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/mmu.c | 53 ++++++++++++++++++--------------------
arch/x86/kvm/paging_tmpl.h | 30 ++++++++++-----------


@@ -2,14 +2,14 @@ From: Paolo Bonzini <pbonzini@redhat.com>
Date: Sun, 23 Jun 2019 19:15:49 +0200
Subject: KVM: x86: remove now unneeded hugepage gfn adjustment
commit d679b32611c0102ce33b9e1a4e4b94854ed1812a upstream.
commit d679b32611c0102ce33b9e1a4e4b94854ed1812a upstream
After the previous patch, the low bits of the gfn are masked in
both FNAME(fetch) and __direct_map, so we do not need to clear them
in transparent_hugepage_adjust.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/mmu.c | 9 +++------
arch/x86/kvm/paging_tmpl.h | 2 +-


@@ -2,7 +2,7 @@ From: Paolo Bonzini <pbonzini@redhat.com>
Date: Sun, 30 Jun 2019 08:36:21 -0400
Subject: KVM: x86: change kvm_mmu_page_get_gfn BUG_ON to WARN_ON
commit e9f2a760b158551bfbef6db31d2cae45ab8072e5 upstream.
commit e9f2a760b158551bfbef6db31d2cae45ab8072e5 upstream
Note that in such a case it is quite likely that KVM will BUG_ON
in __pte_list_remove when the VM is closed. However, there is no
@@ -10,7 +10,7 @@ immediate risk of memory corruption in the host so a WARN_ON is
enough and it lets you gather traces for debugging.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/mmu.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)


@@ -1,19 +1,18 @@
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 1 Jul 2019 06:22:57 -0400
Date: Thu, 4 Jul 2019 05:14:13 -0400
Subject: KVM: x86: add tracepoints around __direct_map and FNAME(fetch)
commit 335e192a3fa415e1202c8b9ecdaaecd643f823cc upstream.
commit 335e192a3fa415e1202c8b9ecdaaecd643f823cc upstream
These are useful in debugging shadow paging.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/mmu.c | 13 ++++-----
arch/x86/kvm/mmu.c | 14 ++++-----
arch/x86/kvm/mmutrace.h | 59 ++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/paging_tmpl.h | 2 ++
3 files changed, 67 insertions(+), 7 deletions(-)
3 files changed, 68 insertions(+), 7 deletions(-)
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -27,7 +26,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
#define SPTE_HOST_WRITEABLE (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
#define SPTE_MMU_WRITEABLE (1ULL << (PT_FIRST_AVAIL_BITS_SHIFT + 1))
@@ -261,9 +258,13 @@ static u64 __read_mostly shadow_nonprese
@@ -261,9 +258,14 @@ static u64 __read_mostly shadow_nonprese
static void mmu_spte_set(u64 *sptep, u64 spte);
@@ -37,11 +36,12 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
+#define CREATE_TRACE_POINTS
+#include "mmutrace.h"
+
+
void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
{
BUG_ON((mmio_mask & mmio_value) != mmio_value);
@@ -2992,10 +2993,7 @@ static int mmu_set_spte(struct kvm_vcpu
@@ -2992,10 +2994,7 @@ static int mmu_set_spte(struct kvm_vcpu
ret = RET_PF_EMULATE;
pgprintk("%s: setting spte %llx\n", __func__, *sptep);
@@ -53,7 +53,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
if (!was_rmapped && is_large_pte(*sptep))
++vcpu->kvm->stat.lpages;
@@ -3106,6 +3104,7 @@ static int __direct_map(struct kvm_vcpu
@@ -3106,6 +3105,7 @@ static int __direct_map(struct kvm_vcpu
if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
return RET_PF_RETRY;


@@ -1,24 +1,23 @@
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Sun, 27 Oct 2019 16:23:23 +0100
Date: Sun, 27 Oct 2019 09:36:37 +0100
Subject: KVM: vmx, svm: always run with EFER.NXE=1 when shadow paging is
active
commit 9167ab79936206118cc60e47dcb926c3489f3bd5 upstream.
commit 9167ab79936206118cc60e47dcb926c3489f3bd5 upstream
VMX already does so if the host has SMEP, in order to support the combination of
CR0.WP=1 and CR4.SMEP=1. However, it is perfectly safe to always do so, and in
fact VMX already ends up running with EFER.NXE=1 on old processors that lack the
"load EFER" controls, because it may help avoiding a slow MSR write. Removing
all the conditionals simplifies the code.
fact VMX also ends up running with EFER.NXE=1 on old processors that lack the
"load EFER" controls, because it may help avoiding a slow MSR write.
SVM does not have similar code, but it should since recent AMD processors do
support SMEP. So this patch also makes the code for the two vendors more similar
while fixing NPT=0, CR0.WP=1 and CR4.SMEP=1 on AMD processors.
support SMEP. So this patch makes the code for the two vendors simpler and
more similar, while fixing an issue with CR0.WP=1 and CR4.SMEP=1 on AMD.
Cc: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 4.19: adjust filename]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: stable@vger.kernel.org
---
arch/x86/kvm/svm.c | 10 ++++++++--
arch/x86/kvm/vmx.c | 14 +++-----------


@@ -2,7 +2,7 @@ From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 4 Nov 2019 12:22:02 +0100
Subject: kvm: mmu: ITLB_MULTIHIT mitigation
commit b8e8c8303ff28c61046a4d0f6ea99aea609a7dc0 upstream.
commit b8e8c8303ff28c61046a4d0f6ea99aea609a7dc0 upstream
With some Intel processors, putting the same virtual address in the TLB
as both a 4 KiB and 2 MiB page can confuse the instruction fetch unit
@@ -26,10 +26,8 @@ and direct EPT is treated in the same way.
Originally-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 4.19:
- Use kvm_mmu_invalidate_zap_all_pages() instead of kvm_mmu_zap_all_fast()
- Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
.../admin-guide/kernel-parameters.txt | 19 +++
arch/x86/include/asm/kvm_host.h | 2 +
@@ -76,14 +74,14 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Mitigate all CPU vulnerabilities, but leave SMT
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -282,6 +282,7 @@ struct kvm_mmu_page {
struct list_head link;
struct hlist_node hash_link;
@@ -293,6 +293,7 @@ struct kvm_mmu_page {
/* hold the gfn of each spte inside spt */
gfn_t *gfns;
bool unsync;
+ bool lpage_disallowed; /* Can't be replaced by an equiv large page */
/*
* The following two entries are used to key the shadow page in the
int root_count; /* Currently serving as active root */
unsigned int unsync_children;
struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
@@ -887,6 +888,7 @@ struct kvm_vm_stat {
ulong mmu_unsync;
ulong remote_tlb_flush;
@@ -94,7 +92,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1229,6 +1229,9 @@ void x86_spec_ctrl_setup_ap(void)
@@ -1225,6 +1225,9 @@ void x86_spec_ctrl_setup_ap(void)
x86_amd_ssb_disable();
}
@@ -104,7 +102,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
#undef pr_fmt
#define pr_fmt(fmt) "L1TF: " fmt
@@ -1384,17 +1387,25 @@ static ssize_t l1tf_show_state(char *buf
@@ -1380,17 +1383,25 @@ static ssize_t l1tf_show_state(char *buf
l1tf_vmx_states[l1tf_vmx_mitigation],
sched_smt_active() ? "vulnerable" : "disabled");
}
@@ -154,7 +152,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
/*
* When setting this variable to true it enables Two-Dimensional-Paging
* where the hardware walks 2 page tables:
@@ -284,6 +298,11 @@ static inline bool spte_ad_enabled(u64 s
@@ -285,6 +299,11 @@ static inline bool spte_ad_enabled(u64 s
return !(spte & shadow_acc_track_value);
}
@@ -166,7 +164,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
static inline u64 spte_shadow_accessed_mask(u64 spte)
{
MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value);
@@ -1096,6 +1115,15 @@ static void account_shadowed(struct kvm
@@ -1097,6 +1116,15 @@ static void account_shadowed(struct kvm
kvm_mmu_gfn_disallow_lpage(slot, gfn);
}
@@ -182,7 +180,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
{
struct kvm_memslots *slots;
@@ -1113,6 +1141,12 @@ static void unaccount_shadowed(struct kv
@@ -1114,6 +1142,12 @@ static void unaccount_shadowed(struct kv
kvm_mmu_gfn_allow_lpage(slot, gfn);
}
@@ -195,7 +193,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
static bool __mmu_gfn_lpage_is_disallowed(gfn_t gfn, int level,
struct kvm_memory_slot *slot)
{
@@ -2665,6 +2699,9 @@ static int kvm_mmu_prepare_zap_page(stru
@@ -2666,6 +2700,9 @@ static int kvm_mmu_prepare_zap_page(stru
kvm_reload_remote_mmus(kvm);
}
@@ -205,7 +203,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
sp->role.invalid = 1;
return ret;
}
@@ -2873,6 +2910,11 @@ static int set_spte(struct kvm_vcpu *vcp
@@ -2874,6 +2911,11 @@ static int set_spte(struct kvm_vcpu *vcp
if (!speculative)
spte |= spte_shadow_accessed_mask(spte);
@@ -217,7 +215,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
if (pte_access & ACC_EXEC_MASK)
spte |= shadow_x_mask;
else
@@ -3091,9 +3133,32 @@ static void direct_pte_prefetch(struct k
@@ -3092,9 +3134,32 @@ static void direct_pte_prefetch(struct k
__direct_pte_prefetch(vcpu, sp, sptep);
}
@@ -251,7 +249,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
{
struct kvm_shadow_walk_iterator it;
struct kvm_mmu_page *sp;
@@ -3106,6 +3171,12 @@ static int __direct_map(struct kvm_vcpu
@@ -3107,6 +3172,12 @@ static int __direct_map(struct kvm_vcpu
trace_kvm_mmu_spte_requested(gpa, level, pfn);
for_each_shadow_entry(vcpu, gpa, it) {
@@ -264,7 +262,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
if (it.level == level)
break;
@@ -3116,6 +3187,8 @@ static int __direct_map(struct kvm_vcpu
@@ -3117,6 +3188,8 @@ static int __direct_map(struct kvm_vcpu
it.level - 1, true, ACC_ALL);
link_shadow_page(vcpu, it.sptep, sp);
@@ -273,7 +271,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
}
}
@@ -3416,11 +3489,14 @@ static int nonpaging_map(struct kvm_vcpu
@@ -3417,11 +3490,14 @@ static int nonpaging_map(struct kvm_vcpu
{
int r;
int level;
@@ -289,7 +287,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
level = mapping_level(vcpu, gfn, &force_pt_level);
if (likely(!force_pt_level)) {
/*
@@ -3454,7 +3530,8 @@ static int nonpaging_map(struct kvm_vcpu
@@ -3455,7 +3531,8 @@ static int nonpaging_map(struct kvm_vcpu
goto out_unlock;
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, gfn, &pfn, &level);
@@ -299,7 +297,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
kvm_release_pfn_clean(pfn);
@@ -4048,6 +4125,8 @@ static int tdp_page_fault(struct kvm_vcp
@@ -4049,6 +4126,8 @@ static int tdp_page_fault(struct kvm_vcp
unsigned long mmu_seq;
int write = error_code & PFERR_WRITE_MASK;
bool map_writable;
@@ -308,7 +306,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
@@ -4058,8 +4137,9 @@ static int tdp_page_fault(struct kvm_vcp
@@ -4059,8 +4138,9 @@ static int tdp_page_fault(struct kvm_vcp
if (r)
return r;
@@ -320,7 +318,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
level = mapping_level(vcpu, gfn, &force_pt_level);
if (likely(!force_pt_level)) {
if (level > PT_DIRECTORY_LEVEL &&
@@ -4088,7 +4168,8 @@ static int tdp_page_fault(struct kvm_vcp
@@ -4089,7 +4169,8 @@ static int tdp_page_fault(struct kvm_vcp
goto out_unlock;
if (likely(!force_pt_level))
transparent_hugepage_adjust(vcpu, gfn, &pfn, &level);
@@ -330,7 +328,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
out_unlock:
spin_unlock(&vcpu->kvm->mmu_lock);
kvm_release_pfn_clean(pfn);
@@ -5886,10 +5967,58 @@ static void mmu_destroy_caches(void)
@@ -5887,10 +5968,58 @@ static void mmu_destroy_caches(void)
kmem_cache_destroy(mmu_page_header_cache);
}


@@ -1,8 +1,8 @@
From: Junaid Shahid <junaids@google.com>
Date: Mon, 4 Nov 2019 12:22:02 +0100
Date: Fri, 1 Nov 2019 00:14:08 +0100
Subject: kvm: Add helper function for creating VM worker threads
commit c57c80467f90e5504c8df9ad3555d2c78800bf94 upstream.
commit c57c80467f90e5504c8df9ad3555d2c78800bf94 upstream
Add a function to create a kernel thread associated with a given VM. In
particular, it ensures that the worker thread inherits the priority and
@@ -11,8 +11,8 @@ cgroups of the calling thread.
Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/kvm_host.h | 6 +++
virt/kvm/kvm_main.c | 84 ++++++++++++++++++++++++++++++++++++++++


@@ -1,8 +1,8 @@
From: Junaid Shahid <junaids@google.com>
Date: Mon, 4 Nov 2019 12:22:03 +0100
Date: Fri, 1 Nov 2019 00:14:14 +0100
Subject: kvm: x86: mmu: Recovery of shattered NX large pages
commit 1aa9b9572b10529c2e64e2b8f44025d86e124308 upstream.
commit 1aa9b9572b10529c2e64e2b8f44025d86e124308 upstream
The page table pages corresponding to broken down large pages are zapped in
FIFO order, so that the large page can potentially be recovered, if it is
@@ -15,10 +15,8 @@ reaches a steady state.
Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bwh: Backported to 4.19:
- Update another error path in kvm_create_vm() to use out_err_no_mmu_notifier
- Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
.../admin-guide/kernel-parameters.txt | 6 +
arch/x86/include/asm/kvm_host.h | 4 +
@@ -45,16 +43,15 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -281,6 +281,8 @@ struct kvm_rmap_head {
@@ -281,6 +281,7 @@ struct kvm_rmap_head {
struct kvm_mmu_page {
struct list_head link;
struct hlist_node hash_link;
+ struct list_head lpage_disallowed_link;
+
bool unsync;
bool lpage_disallowed; /* Can't be replaced by an equiv large page */
@@ -805,6 +807,7 @@ struct kvm_arch {
/*
* The following two entries are used to key the shadow page in the
@@ -805,6 +806,7 @@ struct kvm_arch {
*/
struct list_head active_mmu_pages;
struct list_head zapped_obsolete_pages;
@@ -62,10 +59,11 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
struct kvm_page_track_notifier_node mmu_sp_tracker;
struct kvm_page_track_notifier_head track_notifier_head;
@@ -875,6 +878,7 @@ struct kvm_arch {
@@ -875,6 +877,8 @@ struct kvm_arch {
bool x2apic_broadcast_quirk_disabled;
bool guest_can_read_msr_platform_info;
+
+ struct task_struct *nx_lpage_recovery_thread;
};
@@ -107,7 +105,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
/*
* When setting this variable to true it enables Two-Dimensional-Paging
@@ -1121,6 +1132,8 @@ static void account_huge_nx_page(struct
@@ -1122,6 +1133,8 @@ static void account_huge_nx_page(struct
return;
++kvm->stat.nx_lpage_splits;
@@ -116,7 +114,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
sp->lpage_disallowed = true;
}
@@ -1145,6 +1158,7 @@ static void unaccount_huge_nx_page(struc
@@ -1146,6 +1159,7 @@ static void unaccount_huge_nx_page(struc
{
--kvm->stat.nx_lpage_splits;
sp->lpage_disallowed = false;
@@ -124,7 +122,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
}
static bool __mmu_gfn_lpage_is_disallowed(gfn_t gfn, int level,
@@ -6005,6 +6019,8 @@ static int set_nx_huge_pages(const char
@@ -6006,6 +6020,8 @@ static int set_nx_huge_pages(const char
idx = srcu_read_lock(&kvm->srcu);
kvm_mmu_invalidate_zap_all_pages(kvm);
srcu_read_unlock(&kvm->srcu, idx);
@@ -133,7 +131,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
}
mutex_unlock(&kvm_lock);
}
@@ -6086,3 +6102,116 @@ void kvm_mmu_module_exit(void)
@@ -6087,3 +6103,116 @@ void kvm_mmu_module_exit(void)
unregister_shrinker(&mmu_shrinker);
mmu_audit_disable();
}
@@ -263,7 +261,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
#endif
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8952,6 +8952,7 @@ int kvm_arch_init_vm(struct kvm *kvm, un
@@ -8960,6 +8960,7 @@ int kvm_arch_init_vm(struct kvm *kvm, un
INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
@@ -271,7 +269,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
INIT_LIST_HEAD(&kvm->arch.assigned_dev_head);
atomic_set(&kvm->arch.noncoherent_dma_count, 0);
@@ -8983,6 +8984,11 @@ int kvm_arch_init_vm(struct kvm *kvm, un
@@ -8991,6 +8992,11 @@ int kvm_arch_init_vm(struct kvm *kvm, un
return 0;
}
@@ -283,7 +281,7 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
{
vcpu_load(vcpu);
@@ -9084,6 +9090,11 @@ int x86_set_memory_region(struct kvm *kv
@@ -9092,6 +9098,11 @@ int x86_set_memory_region(struct kvm *kv
}
EXPORT_SYMBOL_GPL(x86_set_memory_region);


@@ -0,0 +1,58 @@
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 19 Aug 2019 17:24:07 +0200
Subject: KVM: x86: use Intel speculation bugs and features as derived in
generic x86 code
commit 0c54914d0c52a15db9954a76ce80fee32cf318f4 upstream
Similar to AMD bits, set the Intel bits from the vendor-independent
feature and bug flags, because KVM_GET_SUPPORTED_CPUID does not care
about the vendor and they should be set on AMD processors as well.
Suggested-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/x86/kvm/cpuid.c | 8 ++++++++
arch/x86/kvm/x86.c | 8 ++++++++
2 files changed, 16 insertions(+)
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -501,8 +501,16 @@ static inline int __do_cpuid_ent(struct
/* PKU is not yet implemented for shadow paging. */
if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
entry->ecx &= ~F(PKU);
+
entry->edx &= kvm_cpuid_7_0_edx_x86_features;
cpuid_mask(&entry->edx, CPUID_7_EDX);
+ if (boot_cpu_has(X86_FEATURE_IBPB) &&
+ boot_cpu_has(X86_FEATURE_IBRS))
+ entry->edx |= F(SPEC_CTRL);
+ if (boot_cpu_has(X86_FEATURE_STIBP))
+ entry->edx |= F(INTEL_STIBP);
+ if (boot_cpu_has(X86_FEATURE_SSBD))
+ entry->edx |= F(SPEC_CTRL_SSBD);
/*
* We emulate ARCH_CAPABILITIES in software even
* if the host doesn't support it.
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1127,8 +1127,16 @@ u64 kvm_get_arch_capabilities(void)
if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
+ if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
+ data |= ARCH_CAP_RDCL_NO;
+ if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+ data |= ARCH_CAP_SSB_NO;
+ if (!boot_cpu_has_bug(X86_BUG_MDS))
+ data |= ARCH_CAP_MDS_NO;
+
return data;
}
+
EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
static int kvm_get_msr_feature(struct kvm_msr_entry *msr)


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 10:45:50 +0200
Subject: x86/msr: Add the IA32_TSX_CTRL MSR
commit c2955f270a84762343000f103e0640d29c7a96f3 upstream.
commit c2955f270a84762343000f103e0640d29c7a96f3 upstream
Transactional Synchronization Extensions (TSX) may be used on certain
processors as part of a speculative side channel attack. A microcode
@@ -52,7 +52,6 @@ Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/include/asm/msr-index.h | 5 +++++
1 file changed, 5 insertions(+)


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 10:52:35 +0200
Subject: x86/cpu: Add a helper function x86_read_arch_cap_msr()
commit 286836a70433fb64131d2590f4bf512097c255e1 upstream.
commit 286836a70433fb64131d2590f4bf512097c255e1 upstream
Add a helper function to read the IA32_ARCH_CAPABILITIES MSR.
@@ -13,7 +13,6 @@ Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/kernel/cpu/common.c | 15 +++++++++++----
arch/x86/kernel/cpu/cpu.h | 2 ++


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 11:01:53 +0200
Subject: x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default
commit 95c5824f75f3ba4c9e8e5a4b1a623c95390ac266 upstream.
commit 95c5824f75f3ba4c9e8e5a4b1a623c95390ac266 upstream
Add a kernel cmdline parameter "tsx" to control the Transactional
Synchronization Extensions (TSX) feature. On CPUs that support TSX
@@ -22,16 +22,14 @@ Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
.../admin-guide/kernel-parameters.txt | 26 ++++
arch/x86/kernel/cpu/Makefile | 2 +-
arch/x86/kernel/cpu/common.c | 2 +
arch/x86/kernel/cpu/common.c | 1 +
arch/x86/kernel/cpu/cpu.h | 16 +++
arch/x86/kernel/cpu/intel.c | 5 +
arch/x86/kernel/cpu/tsx.c | 125 ++++++++++++++++++
6 files changed, 175 insertions(+), 1 deletion(-)
6 files changed, 174 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/kernel/cpu/tsx.c
--- a/Documentation/admin-guide/kernel-parameters.txt
@@ -82,11 +80,10 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
obj-$(CONFIG_CPU_SUP_CENTAUR) += centaur.o
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1482,6 +1482,8 @@ void __init identify_boot_cpu(void)
@@ -1482,6 +1482,7 @@ void __init identify_boot_cpu(void)
enable_sep_cpu();
#endif
cpu_detect_tlb(&boot_cpu_data);
+
+ tsx_init();
}


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 11:30:45 +0200
Subject: x86/speculation/taa: Add mitigation for TSX Async Abort
commit 1b42f017415b46c317e71d41c34ec088417a1883 upstream.
commit 1b42f017415b46c317e71d41c34ec088417a1883 upstream
TSX Async Abort (TAA) is a side channel vulnerability to the internal
buffers in some Intel processors similar to Microarchitectural Data
@@ -56,8 +56,6 @@ Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 4.19: Add #include "cpu.h" in bugs.c]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 4 +


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 12:19:51 +0200
Subject: x86/speculation/taa: Add sysfs reporting for TSX Async Abort
commit 6608b45ac5ecb56f9e171252229c39580cc85f0f upstream.
commit 6608b45ac5ecb56f9e171252229c39580cc85f0f upstream
Add the sysfs reporting file for TSX Async Abort. It exposes the
vulnerability and the mitigation state similar to the existing files for
@@ -19,7 +19,6 @@ Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
drivers/base/cpu.c | 9 +++++++++


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 12:23:33 +0200
Subject: kvm/x86: Export MDS_NO=0 to guests when TSX is enabled
commit e1d38b63acd843cfdd4222bf19a26700fd5c699e upstream.
commit e1d38b63acd843cfdd4222bf19a26700fd5c699e upstream
Export the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX
Async Abort (TAA) affected hosts that have TSX enabled and updated
@@ -26,16 +26,15 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/kvm/x86.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1127,6 +1127,25 @@ u64 kvm_get_arch_capabilities(void)
if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
@@ -1134,6 +1134,25 @@ u64 kvm_get_arch_capabilities(void)
if (!boot_cpu_has_bug(X86_BUG_MDS))
data |= ARCH_CAP_MDS_NO;
+ /*
+ * On TAA affected systems, export MDS_NO=0 when:
@@ -58,4 +57,4 @@ Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
+
return data;
}
EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 12:28:57 +0200
Subject: x86/tsx: Add "auto" option to the tsx= cmdline parameter
commit 7531a3596e3272d1f6841e0d601a614555dc6b65 upstream.
commit 7531a3596e3272d1f6841e0d601a614555dc6b65 upstream
Platforms which are not affected by X86_BUG_TAA may want the TSX feature
enabled. Add "auto" option to the TSX cmdline parameter. When tsx=auto
@@ -18,7 +18,6 @@ Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
Documentation/admin-guide/kernel-parameters.txt | 3 +++
arch/x86/kernel/cpu/tsx.c | 7 ++++++-


@@ -2,7 +2,7 @@ From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Wed, 23 Oct 2019 12:32:55 +0200
Subject: x86/speculation/taa: Add documentation for TSX Async Abort
commit a7a248c593e4fd7a67c50b5f5318fe42a0db335e upstream.
commit a7a248c593e4fd7a67c50b5f5318fe42a0db335e upstream
Add the documentation for TSX Async Abort. Include the description of
the issue, how to check the mitigation state, control the mitigation,
@@ -19,8 +19,6 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
.../ABI/testing/sysfs-devices-system-cpu | 1 +
Documentation/admin-guide/hw-vuln/index.rst | 1 +


@@ -2,7 +2,7 @@ From: Michal Hocko <mhocko@suse.com>
Date: Wed, 23 Oct 2019 12:35:50 +0200
Subject: x86/tsx: Add config options to set tsx=on|off|auto
commit db616173d787395787ecc93eef075fa975227b10 upstream.
commit db616173d787395787ecc93eef075fa975227b10 upstream
There is a general consensus that TSX usage is not widespread, while
history shows there is non-trivial room for side channel attacks
@@ -27,7 +27,6 @@ Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/Kconfig | 45 +++++++++++++++++++++++++++++++++++++++
arch/x86/kernel/cpu/tsx.c | 22 +++++++++++++------


@@ -1,8 +1,8 @@
From: Josh Poimboeuf <jpoimboe@redhat.com>
Date: Wed, 6 Nov 2019 20:26:46 -0600
Subject: x86/speculation/taa: Fix printing of TAA_MSG_SMT on IBRS_ALL CPUs
Origin: https://git.kernel.org/linus/012206a822a8b6ac09125bfaa210a95b9eb8f1c1
Bug-Debian-Security: https://security-tracker.debian.org/tracker/CVE-2019-11135
commit 012206a822a8b6ac09125bfaa210a95b9eb8f1c1 upstream
For new IBRS_ALL CPUs, the Enhanced IBRS check at the beginning of
cpu_bugs_smt_update() causes the function to return early, unintentionally
@@ -29,11 +29,9 @@ Reviewed-by: Borislav Petkov <bp@suse.de>
arch/x86/kernel/cpu/bugs.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 8237b86ba6dc..10d11586f805 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -886,10 +886,6 @@ static void update_mds_branch_idle(void)
@@ -874,10 +874,6 @@ static void update_mds_branch_idle(void)
void arch_smt_update(void)
{
@@ -44,6 +42,3 @@ index 8237b86ba6dc..10d11586f805 100644
mutex_lock(&spec_ctrl_mutex);
switch (spectre_v2_user) {
--
2.24.0

debian/patches/series

@@ -258,31 +258,32 @@ bugfix/all/ALSA-usb-audio-Fix-a-stack-buffer-overflow-bug-in-check_input_term.pa
bugfix/all/vhost-make-sure-log_num-in_num.patch
bugfix/x86/x86-ptrace-fix-up-botched-merge-of-spectrev1-fix.patch
bugfix/all/KVM-coalesced_mmio-add-bounds-checking.patch
bugfix/x86/taa/0001-x86-msr-Add-the-IA32_TSX_CTRL-MSR.patch
bugfix/x86/taa/0002-x86-cpu-Add-a-helper-function-x86_read_arch_cap_msr.patch
bugfix/x86/taa/0003-x86-cpu-Add-a-tsx-cmdline-option-with-TSX-disabled-b.patch
bugfix/x86/taa/0004-x86-speculation-taa-Add-mitigation-for-TSX-Async-Abo.patch
bugfix/x86/taa/0005-x86-speculation-taa-Add-sysfs-reporting-for-TSX-Asyn.patch
bugfix/x86/taa/0006-kvm-x86-Export-MDS_NO-0-to-guests-when-TSX-is-enable.patch
bugfix/x86/taa/0007-x86-tsx-Add-auto-option-to-the-tsx-cmdline-parameter.patch
bugfix/x86/taa/0008-x86-speculation-taa-Add-documentation-for-TSX-Async-.patch
bugfix/x86/taa/0009-x86-tsx-Add-config-options-to-set-tsx-on-off-auto.patch
bugfix/x86/taa/0010-x86-speculation-taa-Fix-printing-of-TAA_MSG_SMT-on-I.patch
bugfix/x86/itlb_multihit/0010-KVM-x86-adjust-kvm_mmu_page-member-to-save-8-bytes.patch
bugfix/x86/itlb_multihit/0011-kvm-Convert-kvm_lock-to-a-mutex.patch
bugfix/x86/itlb_multihit/0012-kvm-x86-Do-not-release-the-page-inside-mmu_set_spte.patch
bugfix/x86/itlb_multihit/0013-KVM-x86-make-FNAME-fetch-and-__direct_map-more-simil.patch
bugfix/x86/itlb_multihit/0014-KVM-x86-remove-now-unneeded-hugepage-gfn-adjustment.patch
bugfix/x86/itlb_multihit/0015-KVM-x86-change-kvm_mmu_page_get_gfn-BUG_ON-to-WARN_O.patch
bugfix/x86/itlb_multihit/0016-KVM-x86-add-tracepoints-around-__direct_map-and-FNAM.patch
bugfix/x86/itlb_multihit/0017-kvm-x86-powerpc-do-not-allow-clearing-largepages-deb.patch
bugfix/x86/itlb_multihit/0018-KVM-vmx-svm-always-run-with-EFER.NXE-1-when-shadow-p.patch
bugfix/x86/itlb_multihit/0019-x86-bugs-Add-ITLB_MULTIHIT-bug-infrastructure.patch
bugfix/x86/itlb_multihit/0020-cpu-speculation-Uninline-and-export-CPU-mitigations-.patch
bugfix/x86/itlb_multihit/0021-kvm-mmu-ITLB_MULTIHIT-mitigation.patch
bugfix/x86/itlb_multihit/0022-kvm-Add-helper-function-for-creating-VM-worker-threa.patch
bugfix/x86/itlb_multihit/0023-kvm-x86-mmu-Recovery-of-shattered-NX-large-pages.patch
bugfix/x86/itlb_multihit/0024-Documentation-Add-ITLB_MULTIHIT-documentation.patch
bugfix/x86/taa/0001-KVM-x86-use-Intel-speculation-bugs-and-features-as-d.patch
bugfix/x86/taa/0002-x86-msr-Add-the-IA32_TSX_CTRL-MSR.patch
bugfix/x86/taa/0003-x86-cpu-Add-a-helper-function-x86_read_arch_cap_msr.patch
bugfix/x86/taa/0004-x86-cpu-Add-a-tsx-cmdline-option-with-TSX-disabled-b.patch
bugfix/x86/taa/0005-x86-speculation-taa-Add-mitigation-for-TSX-Async-Abo.patch
bugfix/x86/taa/0006-x86-speculation-taa-Add-sysfs-reporting-for-TSX-Asyn.patch
bugfix/x86/taa/0007-kvm-x86-Export-MDS_NO-0-to-guests-when-TSX-is-enable.patch
bugfix/x86/taa/0008-x86-tsx-Add-auto-option-to-the-tsx-cmdline-parameter.patch
bugfix/x86/taa/0009-x86-speculation-taa-Add-documentation-for-TSX-Async-.patch
bugfix/x86/taa/0010-x86-tsx-Add-config-options-to-set-tsx-on-off-auto.patch
bugfix/x86/taa/0015-x86-speculation-taa-Fix-printing-of-TAA_MSG_SMT-on-I.patch
bugfix/x86/itlb_multihit/0011-x86-bugs-Add-ITLB_MULTIHIT-bug-infrastructure.patch
bugfix/x86/itlb_multihit/0012-x86-cpu-Add-Tremont-to-the-cpu-vulnerability-whiteli.patch
bugfix/x86/itlb_multihit/0013-cpu-speculation-Uninline-and-export-CPU-mitigations-.patch
bugfix/x86/itlb_multihit/0014-Documentation-Add-ITLB_MULTIHIT-documentation.patch
bugfix/x86/itlb_multihit/0016-kvm-x86-powerpc-do-not-allow-clearing-largepages-deb.patch
bugfix/x86/itlb_multihit/0017-kvm-Convert-kvm_lock-to-a-mutex.patch
bugfix/x86/itlb_multihit/0018-kvm-mmu-Do-not-release-the-page-inside-mmu_set_spte.patch
bugfix/x86/itlb_multihit/0019-KVM-x86-make-FNAME-fetch-and-__direct_map-more-simil.patch
bugfix/x86/itlb_multihit/0020-KVM-x86-remove-now-unneeded-hugepage-gfn-adjustment.patch
bugfix/x86/itlb_multihit/0021-KVM-x86-change-kvm_mmu_page_get_gfn-BUG_ON-to-WARN_O.patch
bugfix/x86/itlb_multihit/0022-KVM-x86-add-tracepoints-around-__direct_map-and-FNAM.patch
bugfix/x86/itlb_multihit/0023-KVM-vmx-svm-always-run-with-EFER.NXE-1-when-shadow-p.patch
bugfix/x86/itlb_multihit/0024-kvm-mmu-ITLB_MULTIHIT-mitigation.patch
bugfix/x86/itlb_multihit/0025-kvm-Add-helper-function-for-creating-VM-worker-threa.patch
bugfix/x86/itlb_multihit/0026-kvm-x86-mmu-Recovery-of-shattered-NX-large-pages.patch
# ABI maintenance
debian/abi/powerpc-avoid-abi-change-for-disabling-tm.patch