[x86] Add mitigation for TSX Asynchronous Abort (CVE-2019-11135)

This is a backport of v6 of the TAA patch set, and will probably
require updates before release.  The subject lines for these patches
didn't come through.
Author: Ben Hutchings
Date:   2019-10-20 14:46:03 +01:00
Parent: d9bd594144
Commit: 96c0e74c50
11 changed files with 1728 additions and 0 deletions

debian/changelog

@@ -14,6 +14,9 @@ linux (4.19.67-2+deb10u2) UNRELEASED; urgency=medium
- kvm: mmu: ITLB_MULTIHIT mitigation
- kvm: Add helper function for creating VM worker threads
- kvm: x86: mmu: Recovery of shattered NX large pages
* [x86] Add mitigation for TSX Asynchronous Abort (CVE-2019-11135).
TSX is now disabled by default; see
Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
-- Ben Hutchings <ben@decadent.org.uk> Sun, 20 Oct 2019 14:21:28 +0100


@@ -0,0 +1,67 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:22:56 -0700
Subject: TAAv6 1
Transactional Synchronization Extensions (TSX) may be used on certain
processors as part of a speculative side channel attack. A microcode
update for existing processors that are vulnerable to this attack will
add a new MSR, IA32_TSX_CTRL to allow the system administrator the
option to disable TSX as one of the possible mitigations. [Note that
future processors that are not vulnerable will also support the
IA32_TSX_CTRL MSR]. Add defines for the new IA32_TSX_CTRL MSR and its
bits.
TSX has two sub-features:
1. Restricted Transactional Memory (RTM) is an explicitly-used feature
where new instructions begin and end TSX transactions.
2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of
"old" style locks are used by software.
Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the
IA32_TSX_CTRL MSR.
There are two control bits in IA32_TSX_CTRL MSR:
Bit 0: When set it disables the Restricted Transactional Memory (RTM)
sub-feature of TSX (will force all transactions to abort on the
XBEGIN instruction).
Bit 1: When set it disables the enumeration of the RTM and HLE feature
(i.e. it will make CPUID(EAX=7).EBX{bit4} and
CPUID(EAX=7).EBX{bit11} read as 0).
The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
disabled but still enumerated as present by CPUID(EAX=7).EBX{bit4}.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
arch/x86/include/asm/msr-index.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index f58e6921cbf7..f45ca8aad98f 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -91,6 +91,7 @@
* physical address or cache type
* without TLB invalidation.
*/
+#define ARCH_CAP_TSX_CTRL_MSR BIT(7) /* MSR for TSX control is available. */
#define MSR_IA32_FLUSH_CMD 0x0000010b
#define L1D_FLUSH BIT(0) /*
@@ -101,6 +102,10 @@
#define MSR_IA32_BBL_CR_CTL 0x00000119
#define MSR_IA32_BBL_CR_CTL3 0x0000011e
+#define MSR_IA32_TSX_CTRL 0x00000122
+#define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
+#define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
+
#define MSR_IA32_SYSENTER_CS 0x00000174
#define MSR_IA32_SYSENTER_ESP 0x00000175
#define MSR_IA32_SYSENTER_EIP 0x00000176
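The effect of the two IA32_TSX_CTRL bits can be sketched with plain bit arithmetic on the 64-bit MSR value. This is an illustrative user-space sketch, not kernel API; the helper names are invented, and the shift values mirror the BIT(0)/BIT(1) defines added above:

```c
#include <stdint.h>

#define TSX_CTRL_RTM_DISABLE (1ULL << 0) /* force all RTM transactions to abort */
#define TSX_CTRL_CPUID_CLEAR (1ULL << 1) /* hide RTM/HLE enumeration from CPUID */

/* Return the MSR value with TSX fully disabled, as tsx_disable() later does. */
static inline uint64_t tsx_msr_disable(uint64_t msr)
{
	return msr | TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
}

/* Return the MSR value with TSX re-enabled, as tsx_enable() later does. */
static inline uint64_t tsx_msr_enable(uint64_t msr)
{
	return msr & ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
}
```

Note that other bits of the MSR value are preserved in both directions, matching the read-modify-write pattern used in the real code.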


@@ -0,0 +1,45 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:23:56 -0700
Subject: TAAv6 2
Add a helper function to read IA32_ARCH_CAPABILITIES MSR. If the CPU
doesn't support this MSR return 0.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
[bwh: Forward-ported on top of NX: Fix conflict (neighbouring changes)
in arch/x86/kernel/cpu/common.c]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/kernel/cpu/common.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 128808dccd2f..cee109bd7f00 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1018,13 +1018,20 @@ static bool __init cpu_matches(unsigned long which)
return m && !!(m->driver_data & which);
}
-static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+u64 x86_read_arch_cap_msr(void)
{
u64 ia32_cap = 0;
- if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+ if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+ return ia32_cap;
+}
+
+static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+{
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
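The helper's fallback behaviour is the important part: when CPUID does not enumerate ARCH_CAPABILITIES, the function returns 0, so every "not affected" bit reads as clear and the CPU is assumed vulnerable. A minimal user-space sketch of that logic (hypothetical names; the real function reads the MSR with rdmsrl()):

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Sketch of x86_read_arch_cap_msr(): only consult the MSR value when the
 * ARCH_CAPABILITIES feature is enumerated; otherwise report 0 so callers
 * treat the CPU as affected by everything the MSR could have waived.
 */
static uint64_t read_arch_cap(bool has_arch_capabilities, uint64_t msr_value)
{
	return has_arch_capabilities ? msr_value : 0;
}
```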


@@ -0,0 +1,245 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:24:56 -0700
Subject: TAAv6 3
Add kernel cmdline parameter "tsx" to control the Transactional
Synchronization Extensions (TSX) feature. On CPUs that support TSX
control, use "tsx=on|off" to enable or disable TSX. Not specifying this
option is equivalent to "tsx=off". This is because on certain processors
TSX may be used as a part of a speculative side channel attack.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
.../admin-guide/kernel-parameters.txt | 11 ++
arch/x86/kernel/cpu/Makefile | 2 +-
arch/x86/kernel/cpu/common.c | 2 +
arch/x86/kernel/cpu/cpu.h | 18 +++
arch/x86/kernel/cpu/intel.c | 5 +
arch/x86/kernel/cpu/tsx.c | 115 ++++++++++++++++++
6 files changed, 152 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/kernel/cpu/tsx.c
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index efdc471ed0b9..f03756d2addb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4707,6 +4707,17 @@
marks the TSC unconditionally unstable at bootup and
avoids any further wobbles once the TSC watchdog notices.
+ tsx= [X86] Control Transactional Synchronization
+ Extensions (TSX) feature in Intel processors that
+ support TSX control.
+
+ This parameter controls the TSX feature. The options are:
+
+ on - Enable TSX on the system.
+ off - Disable TSX on the system.
+
+ Not specifying this option is equivalent to tsx=off.
+
turbografx.map[2|3]= [HW,JOY]
TurboGraFX parallel port interface
Format:
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 347137e80bf5..320769b4807b 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -28,7 +28,7 @@ obj-y += cpuid-deps.o
obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
-obj-$(CONFIG_CPU_SUP_INTEL) += intel.o intel_pconfig.o
+obj-$(CONFIG_CPU_SUP_INTEL) += intel.o intel_pconfig.o tsx.o
obj-$(CONFIG_CPU_SUP_AMD) += amd.o
obj-$(CONFIG_CPU_SUP_CYRIX_32) += cyrix.o
obj-$(CONFIG_CPU_SUP_CENTAUR) += centaur.o
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index cee109bd7f00..5f89d78fe132 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1487,6 +1487,8 @@ void __init identify_boot_cpu(void)
enable_sep_cpu();
#endif
cpu_detect_tlb(&boot_cpu_data);
+
+ tsx_init();
}
void identify_secondary_cpu(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 7b229afa0a37..236582c90d3f 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -45,6 +45,22 @@ struct _tlb_table {
extern const struct cpu_dev *const __x86_cpu_dev_start[],
*const __x86_cpu_dev_end[];
+#ifdef CONFIG_CPU_SUP_INTEL
+enum tsx_ctrl_states {
+ TSX_CTRL_ENABLE,
+ TSX_CTRL_DISABLE,
+ TSX_CTRL_NOT_SUPPORTED,
+};
+
+extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+
+extern void __init tsx_init(void);
+extern void tsx_enable(void);
+extern void tsx_disable(void);
+#else
+static inline void tsx_init(void) { }
+#endif /* CONFIG_CPU_SUP_INTEL */
+
extern void get_cpu_cap(struct cpuinfo_x86 *c);
extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
@@ -65,4 +81,6 @@ unsigned int aperfmperf_get_khz(int cpu);
extern void x86_spec_ctrl_setup_ap(void);
+extern u64 x86_read_arch_cap_msr(void);
+
#endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index fc3c07fe7df5..a5287b18a63f 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -766,6 +766,11 @@ static void init_intel(struct cpuinfo_x86 *c)
init_intel_energy_perf(c);
init_intel_misc_features(c);
+
+ if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+ tsx_enable();
+ if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+ tsx_disable();
}
#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
new file mode 100644
index 000000000000..e39b33b7cef8
--- /dev/null
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Intel Transactional Synchronization Extensions (TSX) control.
+ *
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * Author:
+ * Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+ */
+
+#include <linux/processor.h>
+#include <linux/cpufeature.h>
+
+#include <asm/cmdline.h>
+
+#include "cpu.h"
+
+enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+
+void tsx_disable(void)
+{
+ u64 tsx;
+
+ rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+ /* Force all transactions to immediately abort */
+ tsx |= TSX_CTRL_RTM_DISABLE;
+ /*
+ * Ensure TSX support is not enumerated in CPUID.
+ * This is visible to userspace and will ensure they
+ * do not waste resources trying TSX transactions that
+ * will always abort.
+ */
+ tsx |= TSX_CTRL_CPUID_CLEAR;
+
+ wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+void tsx_enable(void)
+{
+ u64 tsx;
+
+ rdmsrl(MSR_IA32_TSX_CTRL, tsx);
+
+ /* Enable the RTM feature in the cpu */
+ tsx &= ~TSX_CTRL_RTM_DISABLE;
+ /*
+ * Ensure TSX support is enumerated in CPUID.
+ * This is visible to userspace and will ensure they
+ * can enumerate and use the TSX feature.
+ */
+ tsx &= ~TSX_CTRL_CPUID_CLEAR;
+
+ wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+}
+
+static bool __init tsx_ctrl_is_supported(void)
+{
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
+ /*
+ * TSX is controlled via MSR_IA32_TSX_CTRL. However,
+ * support for this MSR is enumerated by ARCH_CAP_TSX_MSR bit
+ * in MSR_IA32_ARCH_CAPABILITIES.
+ */
+ return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR);
+}
+
+void __init tsx_init(void)
+{
+ char arg[20];
+ int ret;
+
+ if (!tsx_ctrl_is_supported())
+ return;
+
+ ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
+ if (ret >= 0) {
+ if (!strcmp(arg, "on")) {
+ tsx_ctrl_state = TSX_CTRL_ENABLE;
+ } else if (!strcmp(arg, "off")) {
+ tsx_ctrl_state = TSX_CTRL_DISABLE;
+ } else {
+ tsx_ctrl_state = TSX_CTRL_DISABLE;
+ pr_info("tsx: invalid option, defaulting to off\n");
+ }
+ } else {
+ /* tsx= not provided, defaulting to off */
+ tsx_ctrl_state = TSX_CTRL_DISABLE;
+ }
+
+ if (tsx_ctrl_state == TSX_CTRL_DISABLE) {
+ tsx_disable();
+ /*
+ * tsx_disable() will change the state of the
+ * RTM CPUID bit. Clear it here since it is now
+ * expected to be not set.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_RTM);
+ } else if (tsx_ctrl_state == TSX_CTRL_ENABLE) {
+ /*
+ * HW defaults TSX to be enabled at bootup.
+ * We may still need the TSX enable support
+ * during init for special cases like
+ * kexec after TSX is disabled.
+ */
+ tsx_enable();
+ /*
+ * tsx_enable() will change the state of the
+ * RTM CPUID bit. Force it here since it is now
+ * expected to be set.
+ */
+ setup_force_cpu_cap(X86_FEATURE_RTM);
+ }
+}
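The option handling above collapses to a small decision: only an explicit "tsx=on" enables TSX, while "off", an unrecognized value, and a missing option all disable it. A stand-alone sketch of that mapping (illustrative helper, not the kernel's cmdline_find_option() path):

```c
#include <string.h>

enum tsx_ctrl_states {
	TSX_CTRL_ENABLE,
	TSX_CTRL_DISABLE,
	TSX_CTRL_NOT_SUPPORTED,
};

/*
 * Map a "tsx=" argument to a control state. Anything unrecognized,
 * including a missing option (NULL here), falls back to disabling TSX,
 * matching the patch's default-off policy.
 */
static enum tsx_ctrl_states tsx_parse_arg(const char *arg)
{
	if (arg && !strcmp(arg, "on"))
		return TSX_CTRL_ENABLE;
	return TSX_CTRL_DISABLE;
}
```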


@@ -0,0 +1,336 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:25:56 -0700
Subject: TAAv6 4
TSX Async Abort (TAA) is a side channel vulnerability to the internal
buffers in some Intel processors similar to Microarchitectural Data
Sampling (MDS). In this case certain loads may speculatively pass
invalid data to dependent operations when an asynchronous abort
condition is pending in a TSX transaction. This includes loads with no
fault or assist condition. Such loads may speculatively expose stale
data from the uarch data structures as in MDS. Scope of exposure is
within the same-thread and cross-thread. This issue affects all current
processors that support TSX, but do not have ARCH_CAP_TAA_NO (bit 8) set
in MSR_IA32_ARCH_CAPABILITIES.
On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0,
CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers
using VERW or L1D_FLUSH, there is no additional mitigation needed for
TAA.
On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling
Transactional Synchronization Extensions (TSX) feature. A new MSR
IA32_TSX_CTRL in future and current processors after a microcode update
can be used to control TSX feature. TSX_CTRL_RTM_DISABLE bit disables
the TSX sub-feature Restricted Transactional Memory (RTM).
TSX_CTRL_CPUID_CLEAR bit clears the RTM enumeration in CPUID. The other
TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
disabled with updated microcode but still enumerated as present by
CPUID(EAX=7).EBX{bit4}.
The second mitigation approach is similar to MDS which is clearing the
affected CPU buffers on return to user space and when entering a guest.
Relevant microcode update is required for the mitigation to work. More
details on this approach can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
TSX feature can be controlled by the "tsx" command line parameter. If
the TSX feature is forced to be enabled then "Clear CPU buffers" (MDS
mitigation) is deployed. The effective mitigation state can be read from
sysfs.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
[bwh: Forward-ported on top of NX: Renumber bug bit after
X86_BUG_ITLB_MULTIHIT]
[bwh: Backported to 4.19: Add #include "cpu.h" in bugs.c]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 4 +
arch/x86/include/asm/nospec-branch.h | 4 +-
arch/x86/include/asm/processor.h | 7 ++
arch/x86/kernel/cpu/bugs.c | 129 ++++++++++++++++++++++++++-
arch/x86/kernel/cpu/common.c | 15 ++++
6 files changed, 156 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ccad4f183400..5a2eecfed727 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -390,5 +390,6 @@
#define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */
#define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
#define X86_BUG_ITLB_MULTIHIT X86_BUG(22) /* CPU may incur MCE during certain page attribute changes */
+#define X86_BUG_TAA X86_BUG(23) /* CPU is affected by TSX Async Abort(TAA) */
#endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index f45ca8aad98f..6d17eb64cc69 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -92,6 +92,10 @@
* without TLB invalidation.
*/
#define ARCH_CAP_TSX_CTRL_MSR BIT(7) /* MSR for TSX control is available. */
+#define ARCH_CAP_TAA_NO BIT(8) /*
+ * Not susceptible to
+ * TSX Async Abort (TAA) vulnerabilities.
+ */
#define MSR_IA32_FLUSH_CMD 0x0000010b
#define L1D_FLUSH BIT(0) /*
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 28cb2b31527a..09c7466c4880 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -323,7 +323,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
#include <asm/segment.h>
/**
- * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
*
* This uses the otherwise unused and obsolete VERW instruction in
* combination with microcode which triggers a CPU buffer flush when the
@@ -346,7 +346,7 @@ static inline void mds_clear_cpu_buffers(void)
}
/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
*
* Clear CPU buffers if the corresponding static key is enabled
*/
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b54f25697beb..4a163f33a07d 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -1003,4 +1003,11 @@ enum mds_mitigations {
MDS_MITIGATION_VMWERV,
};
+enum taa_mitigations {
+ TAA_MITIGATION_OFF,
+ TAA_MITIGATION_UCODE_NEEDED,
+ TAA_MITIGATION_VERW,
+ TAA_MITIGATION_TSX_DISABLE,
+};
+
#endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1e764992fa64..841f106a277a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -32,11 +32,14 @@
#include <asm/e820/api.h>
#include <asm/hypervisor.h>
+#include "cpu.h"
+
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
+static void __init taa_select_mitigation(void);
/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
u64 x86_spec_ctrl_base;
@@ -103,6 +106,7 @@ void __init check_bugs(void)
ssb_select_mitigation();
l1tf_select_mitigation();
mds_select_mitigation();
+ taa_select_mitigation();
arch_smt_update();
@@ -266,6 +270,110 @@ static int __init mds_cmdline(char *str)
}
early_param("mds", mds_cmdline);
+#undef pr_fmt
+#define pr_fmt(fmt) "TAA: " fmt
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static bool taa_nosmt __ro_after_init;
+
+static const char * const taa_strings[] = {
+ [TAA_MITIGATION_OFF] = "Vulnerable",
+ [TAA_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode",
+ [TAA_MITIGATION_VERW] = "Mitigation: Clear CPU buffers",
+ [TAA_MITIGATION_TSX_DISABLE] = "Mitigation: TSX disabled",
+};
+
+static void __init taa_select_mitigation(void)
+{
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
+ if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+ taa_mitigation = TAA_MITIGATION_OFF;
+ return;
+ }
+
+ /*
+ * As X86_BUG_TAA=1, TSX feature is supported by the hardware. If
+ * TSX was disabled (X86_FEATURE_RTM=0) earlier during tsx_init(),
+ * select TSX_DISABLE as mitigation.
+ *
+ * This check is ahead of mitigations=off and tsx_async_abort=off
+ * because when TSX is disabled mitigation is already in place. This
+ * ensures sysfs doesn't show "Vulnerable" when TSX is disabled.
+ */
+ if (!boot_cpu_has(X86_FEATURE_RTM)) {
+ taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+ pr_info("%s\n", taa_strings[taa_mitigation]);
+ return;
+ }
+
+ /* All mitigations turned off from cmdline (mitigations=off) */
+ if (cpu_mitigations_off()) {
+ taa_mitigation = TAA_MITIGATION_OFF;
+ return;
+ }
+
+ /* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
+ if (taa_mitigation == TAA_MITIGATION_OFF) {
+ pr_info("%s\n", taa_strings[taa_mitigation]);
+ return;
+ }
+
+ if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ taa_mitigation = TAA_MITIGATION_VERW;
+ else
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+ /*
+ * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+ * A microcode update fixes this behavior to clear CPU buffers.
+ * Microcode update also adds support for MSR_IA32_TSX_CTRL which
+ * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
+ *
+ * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+ * update is required.
+ */
+ if ((ia32_cap & ARCH_CAP_MDS_NO) &&
+ !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+ /*
+ * TSX is enabled, select alternate mitigation for TAA which is
+ * same as MDS. Enable MDS static branch to clear CPU buffers.
+ *
+ * For guests that can't determine whether the correct microcode is
+ * present on host, enable the mitigation for UCODE_NEEDED as well.
+ */
+ static_branch_enable(&mds_user_clear);
+
+ if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ cpu_smt_disable(false);
+
+ pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static int __init tsx_async_abort_cmdline(char *str)
+{
+ if (!boot_cpu_has_bug(X86_BUG_TAA))
+ return 0;
+
+ if (!str)
+ return -EINVAL;
+
+ if (!strcmp(str, "off")) {
+ taa_mitigation = TAA_MITIGATION_OFF;
+ } else if (!strcmp(str, "full")) {
+ taa_mitigation = TAA_MITIGATION_VERW;
+ } else if (!strcmp(str, "full,nosmt")) {
+ taa_mitigation = TAA_MITIGATION_VERW;
+ taa_nosmt = true;
+ }
+
+ return 0;
+}
+early_param("tsx_async_abort", tsx_async_abort_cmdline);
+
#undef pr_fmt
#define pr_fmt(fmt) "Spectre V1 : " fmt
@@ -751,7 +859,7 @@ static void update_indir_branch_cond(void)
#undef pr_fmt
#define pr_fmt(fmt) fmt
-/* Update the static key controlling the MDS CPU buffer clear in idle */
+/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
static void update_mds_branch_idle(void)
{
/*
@@ -761,8 +869,11 @@ static void update_mds_branch_idle(void)
* The other variants cannot be mitigated when SMT is enabled, so
* clearing the buffers on idle just to prevent the Store Buffer
* repartitioning leak would be a window dressing exercise.
+ *
+ * Apply idle buffer clearing to TAA affected CPUs also.
*/
- if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
+ if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
+ !boot_cpu_has_bug(X86_BUG_TAA))
return;
if (sched_smt_active())
@@ -772,6 +883,7 @@ static void update_mds_branch_idle(void)
}
#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
+#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
void arch_smt_update(void)
{
@@ -804,6 +916,19 @@ void arch_smt_update(void)
break;
}
+ switch (taa_mitigation) {
+ case TAA_MITIGATION_VERW:
+ case TAA_MITIGATION_UCODE_NEEDED:
+ if (sched_smt_active())
+ pr_warn_once(TAA_MSG_SMT);
+ /* TSX is enabled, apply MDS idle buffer clearing. */
+ update_mds_branch_idle();
+ break;
+ case TAA_MITIGATION_TSX_DISABLE:
+ case TAA_MITIGATION_OFF:
+ break;
+ }
+
mutex_unlock(&spec_ctrl_mutex);
}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5f89d78fe132..394bcb0403c9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1058,6 +1058,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
if (!cpu_matches(NO_SWAPGS))
setup_force_cpu_bug(X86_BUG_SWAPGS);
+ /*
+ * When processor is not mitigated for TAA (TAA_NO=0) set TAA bug when:
+ * - TSX is supported or
+ * - TSX_CTRL is supported
+ *
+ * TSX_CTRL check is needed for cases when TSX could be disabled before
+ * the kernel boot e.g. kexec
+ TSX_CTRL check alone is not sufficient for cases when the microcode
+ update is not present or when running as a guest that doesn't get TSX_CTRL.
+ */
+ if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
+ (boot_cpu_has(X86_FEATURE_RTM) ||
+ (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+ setup_force_cpu_bug(X86_BUG_TAA);
+
if (cpu_matches(NO_MELTDOWN))
return;
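The bug-bit condition in the hunk above is a small boolean function of three inputs. A sketch that makes the truth table explicit (illustrative names; the kernel uses setup_force_cpu_bug() rather than returning a value):

```c
#include <stdbool.h>

/*
 * A CPU is marked with X86_BUG_TAA when it does not advertise TAA_NO and
 * either enumerates RTM or exposes the TSX control MSR. The TSX_CTRL leg
 * covers kexec-style boots where TSX was already disabled before this
 * kernel started, so RTM enumeration alone would under-report the bug.
 */
static bool cpu_has_taa_bug(bool taa_no, bool has_rtm, bool has_tsx_ctrl)
{
	return !taa_no && (has_rtm || has_tsx_ctrl);
}
```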


@@ -0,0 +1,119 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:26:56 -0700
Subject: TAAv6 5
Add the sysfs reporting file for TSX Async Abort. It exposes the
vulnerability and the mitigation state similar to the existing files for
the other hardware vulnerabilities.
sysfs file path is:
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
[bwh: Forward-ported on top of NX: Fix conflicts (neighbouring
insertions) in arch/x86/kernel/cpu/bugs.c, drivers/base/cpu.c,
include/linux/cpu.h]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
arch/x86/kernel/cpu/bugs.c | 23 +++++++++++++++++++++++
drivers/base/cpu.c | 9 +++++++++
include/linux/cpu.h | 3 +++
3 files changed, 35 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 841f106a277a..c435bc5dc19b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1439,6 +1439,21 @@ static ssize_t mds_show_state(char *buf)
sched_smt_active() ? "vulnerable" : "disabled");
}
+static ssize_t tsx_async_abort_show_state(char *buf)
+{
+ if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLE) ||
+ (taa_mitigation == TAA_MITIGATION_OFF))
+ return sprintf(buf, "%s\n", taa_strings[taa_mitigation]);
+
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
+ return sprintf(buf, "%s; SMT Host state unknown\n",
+ taa_strings[taa_mitigation]);
+ }
+
+ return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation],
+ sched_smt_active() ? "vulnerable" : "disabled");
+}
+
static char *stibp_state(void)
{
if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
@@ -1510,6 +1525,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
case X86_BUG_ITLB_MULTIHIT:
return itlb_multihit_show_state(buf);
+ case X86_BUG_TAA:
+ return tsx_async_abort_show_state(buf);
+
default:
break;
}
@@ -1551,4 +1569,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
{
return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
}
+
+ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return cpu_show_common(dev, attr, buf, X86_BUG_TAA);
+}
#endif
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index c21e2aec5cbb..e9e7fde0fe00 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -558,6 +558,13 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
return sprintf(buf, "Not affected\n");
}
+ssize_t __weak cpu_show_tsx_async_abort(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "Not affected\n");
+}
+
static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@@ -565,6 +572,7 @@ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
+static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_meltdown.attr,
@@ -574,6 +582,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_l1tf.attr,
&dev_attr_mds.attr,
&dev_attr_itlb_multihit.attr,
+ &dev_attr_tsx_async_abort.attr,
NULL
};
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 7bb824b0f30e..9d8dba19844e 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -61,6 +61,9 @@ extern ssize_t cpu_show_mds(struct device *dev,
struct device_attribute *attr, char *buf);
extern ssize_t cpu_show_itlb_multihit(struct device *dev,
struct device_attribute *attr, char *buf);
+extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
+ struct device_attribute *attr,
+ char *buf);
extern __printf(4, 5)
struct device *cpu_device_create(struct device *parent, void *drvdata,
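The sysfs line composed by tsx_async_abort_show_state() has two SMT-qualified forms. A stand-alone sketch of that formatting for the VERW/UCODE_NEEDED cases (hypothetical helper, not the kernel function; the TSX_DISABLE and OFF cases print the bare mitigation string instead):

```c
#include <stdio.h>
#include <string.h>

/*
 * Compose the tsx_async_abort sysfs line: under a hypervisor the host's
 * SMT state is unknowable, otherwise report whether SMT leaves the
 * mitigation incomplete.
 */
static int taa_show(char *buf, size_t len, const char *mitigation,
		    int on_hypervisor, int smt_active)
{
	if (on_hypervisor)
		return snprintf(buf, len, "%s; SMT Host state unknown\n",
				mitigation);
	return snprintf(buf, len, "%s; SMT %s\n", mitigation,
			smt_active ? "vulnerable" : "disabled");
}
```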


@ -0,0 +1,57 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:27:56 -0700
Subject: TAAv6 6
Export IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX Async
Abort (TAA) affected hosts that have TSX enabled and updated microcode.
This is required so that the guests don't complain,
"Vulnerable: Clear CPU buffers attempted, no microcode"
when the host has the updated microcode to clear CPU buffers.
Microcode update also adds support for MSR_IA32_TSX_CTRL which is
enumerated by the ARCH_CAP_TSX_CTRL bit in IA32_ARCH_CAPABILITIES MSR.
Guests can't do this check themselves when the ARCH_CAP_TSX_CTRL bit is
not exported to the guests.
In this case export MDS_NO=0 to the guests. When guests have
CPUID.MD_CLEAR=1 guests deploy MDS mitigation which also mitigates TAA.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
arch/x86/kvm/x86.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1ecadf51f154..5ccf79739b2b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1143,6 +1143,25 @@ u64 kvm_get_arch_capabilities(void)
if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
+ /*
+ * On TAA affected systems, export MDS_NO=0 when:
+ * - TSX is enabled on host, i.e. X86_FEATURE_RTM=1.
+ * - Updated microcode is present. This is detected by
+ * the presence of ARCH_CAP_TSX_CTRL_MSR. This ensures
+ * VERW clears CPU buffers.
+ *
+ * When MDS_NO=0 is exported, guests deploy clear CPU buffer
+ * mitigation and don't complain:
+ *
+ * "Vulnerable: Clear CPU buffers attempted, no microcode"
+ *
+ * If TSX is disabled on the system, guests are also mitigated against
+ * TAA and clear CPU buffer mitigation is not required for guests.
+ */
+ if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
+ (data & ARCH_CAP_TSX_CTRL_MSR))
+ data &= ~ARCH_CAP_MDS_NO;
+
return data;
}
EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
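The masking logic above only hides MDS_NO when all three host conditions line up. A sketch of that filter on the capability word (illustrative stand-alone helper; bit positions follow the msr-index.h defines, MDS_NO being BIT(5)):

```c
#include <stdint.h>
#include <stdbool.h>

#define ARCH_CAP_MDS_NO       (1ULL << 5)
#define ARCH_CAP_TSX_CTRL_MSR (1ULL << 7)

/*
 * Hide MDS_NO from guests when the TAA-affected host keeps TSX enabled
 * and carries the TSX_CTRL microcode, so guests fall back to the VERW
 * buffer-clearing mitigation instead of reporting "no microcode".
 */
static uint64_t guest_arch_capabilities(uint64_t data, bool host_taa_bug,
					bool host_rtm)
{
	if (host_taa_bug && host_rtm && (data & ARCH_CAP_TSX_CTRL_MSR))
		data &= ~ARCH_CAP_MDS_NO;
	return data;
}
```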


@ -0,0 +1,51 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:28:56 -0700
Subject: TAAv6 7
Platforms which are not affected by X86_BUG_TAA may want the TSX feature
enabled. Add "auto" option to the TSX cmdline parameter. When tsx=auto
disable TSX when X86_BUG_TAA is present, otherwise enable TSX.
More details on X86_BUG_TAA can be found here:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
---
Documentation/admin-guide/kernel-parameters.txt | 5 +++++
arch/x86/kernel/cpu/tsx.c | 5 +++++
2 files changed, 10 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f03756d2addb..dffdd4d86f4b 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4715,6 +4715,11 @@
on - Enable TSX on the system.
off - Disable TSX on the system.
+ auto - Disable TSX if X86_BUG_TAA is present,
+ otherwise enable TSX on the system.
+
+ More details on X86_BUG_TAA are here:
+ Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
Not specifying this option is equivalent to tsx=off.
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index e39b33b7cef8..e93abe6f0bb9 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -80,6 +80,11 @@ void __init tsx_init(void)
tsx_ctrl_state = TSX_CTRL_ENABLE;
} else if (!strcmp(arg, "off")) {
tsx_ctrl_state = TSX_CTRL_DISABLE;
+ } else if (!strcmp(arg, "auto")) {
+ if (boot_cpu_has_bug(X86_BUG_TAA))
+ tsx_ctrl_state = TSX_CTRL_DISABLE;
+ else
+ tsx_ctrl_state = TSX_CTRL_ENABLE;
} else {
tsx_ctrl_state = TSX_CTRL_DISABLE;
pr_info("tsx: invalid option, defaulting to off\n");


@@ -0,0 +1,411 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:29:57 -0700
Subject: TAAv6 8
Add the documentation for TSX Async Abort: a description of the issue, how
to check the mitigation state, how to control the mitigation, and guidance
for system administrators.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Co-developed-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
[bwh: Forward-ported on top of NX: Fix conflict (neighbouring
insertions) in Documentation/ABI/testing/sysfs-devices-system-cpu]
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
.../ABI/testing/sysfs-devices-system-cpu | 1 +
Documentation/admin-guide/hw-vuln/index.rst | 1 +
.../admin-guide/hw-vuln/tsx_async_abort.rst | 240 ++++++++++++++++++
.../admin-guide/kernel-parameters.txt | 36 +++
Documentation/x86/index.rst | 1 +
Documentation/x86/tsx_async_abort.rst | 54 ++++
6 files changed, 333 insertions(+)
create mode 100644 Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
create mode 100644 Documentation/x86/tsx_async_abort.rst
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -479,6 +479,7 @@ What: /sys/devices/system/cpu/vulnerabi
/sys/devices/system/cpu/vulnerabilities/l1tf
/sys/devices/system/cpu/vulnerabilities/mds
/sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -12,3 +12,4 @@ are configurable at compile, boot or run
spectre
l1tf
mds
+ tsx_async_abort
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -0,0 +1,240 @@
+TAA - TSX Asynchronous Abort
+======================================
+
+TAA is a hardware vulnerability that allows unprivileged speculative access to
+data which is available in various CPU internal buffers by using asynchronous
+aborts within an Intel TSX transactional region.
+
+Affected processors
+-------------------
+
+This vulnerability only affects Intel processors that support Intel
+Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8)
+is 0 in the IA32_ARCH_CAPABILITIES MSR. On processors where the MDS_NO bit
+(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations
+also mitigate against TAA.
+
+Whether a processor is affected or not can be read out from the TAA
+vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`.
+
+Related CVEs
+------------
+
+The following CVE entry is related to this TAA issue:
+
+ ============== ===== ===================================================
+ CVE-2019-11135 TAA TSX Asynchronous Abort (TAA) condition on some
+ microprocessors utilizing speculative execution may
+ allow an authenticated user to potentially enable
+ information disclosure via a side channel with
+ local access.
+ ============== ===== ===================================================
+
+Problem
+-------
+
+When performing store, load or L1 refill operations, processors write data into
+temporary microarchitectural structures (buffers). The data in the buffer can
+be forwarded to load operations as an optimization.
+
+Intel TSX is an extension to the x86 instruction set architecture that adds
+hardware transactional memory support to improve the performance of
+multi-threaded software. TSX lets the processor expose and exploit concurrency
+hidden in an application by dynamically avoiding unnecessary synchronization.
+
+TSX supports atomic memory transactions that are either committed (success) or
+aborted. During an abort, operations that happened within the transactional region
+are rolled back. An asynchronous abort takes place, among other reasons, when a
+different thread accesses a cache line that is also used within the transactional
+region and that access might lead to a data race.
+
+Immediately after an uncompleted asynchronous abort, certain speculatively
+executed loads may read data from those internal buffers and pass it to dependent
+operations. This can be then used to infer the value via a cache side channel
+attack.
+
+Because the buffers are potentially shared between Hyper-Threads,
+cross-Hyper-Thread attacks are possible.
+
+The victim of a malicious actor does not need to make use of TSX. Only the
+attacker needs to begin a TSX transaction and raise an asynchronous abort
+to try to leak some of the data stored in the buffers.
+
+Deeper technical information is available in the TAA specific x86 architecture
+section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`.
+
+
+Attack scenarios
+----------------
+
+Attacks against the TAA vulnerability can be implemented from unprivileged
+applications running on hosts or guests.
+
+As with MDS, the attacker has no control over the memory addresses that can be
+leaked. Only the victim is responsible for bringing data to the CPU. As a
+result, the malicious actor has to first sample as much data as possible and
+then postprocess it to try to infer any useful information from it.
+
+A potential attacker only has read access to the data. Also, there is no direct
+privilege escalation by using this technique.
+
+
+.. _tsx_async_abort_sys_info:
+
+TAA system information
+-----------------------
+
+The Linux kernel provides a sysfs interface to enumerate the current TAA status
+of mitigated systems. The relevant sysfs file is:
+
+/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+
+The possible values in this file are:
+
+.. list-table::
+
+ * - 'Vulnerable'
+ - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied.
+ * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
+ - The system tries to clear the buffers but the microcode might not support the operation.
+ * - 'Mitigation: Clear CPU buffers'
+ - The microcode has been updated to clear the buffers. TSX is still enabled.
+ * - 'Mitigation: TSX disabled'
+ - TSX is disabled.
+ * - 'Not affected'
+ - The CPU is not affected by this issue.
+
+.. _ucode_needed:
+
+Best effort mitigation mode
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the processor is vulnerable, but the availability of the microcode-based
+mitigation mechanism is not advertised via CPUID, the kernel selects a best
+effort mitigation mode. This mode invokes the mitigation instructions
+without a guarantee that they clear the CPU buffers.
+
+This is done to address virtualization scenarios where the host has the
+microcode update applied, but the hypervisor is not yet updated to expose the
+CPUID to the guest. If the host has updated microcode the protection takes
+effect; otherwise a few CPU cycles are wasted pointlessly.
+
+The state in the tsx_async_abort sysfs file reflects this situation
+accordingly.
+
+
+Mitigation mechanism
+--------------------
+
+The kernel detects the affected CPUs and the presence of the microcode which is
+required. If a CPU is affected and the microcode is available, then the kernel
+enables the mitigation by default.
+
+
+The mitigation can be controlled at boot time via a kernel command line option.
+See :ref:`taa_mitigation_control_command_line`. It also provides a sysfs
+interface. See :ref:`taa_mitigation_sysfs`.
+
+.. _virt_mechanism:
+
+Virtualization mitigation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Affected systems where the host has the TAA microcode and the TAA mitigation is
+ON (with TSX disabled) are not vulnerable regardless of the status of the VMs.
+
+In all other cases, if the host either does not have the TAA microcode or the
+kernel is not mitigated, the system might be vulnerable.
+
+
+.. _taa_mitigation_control_command_line:
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+The kernel command line allows control of the TAA mitigations at boot time with
+the option "tsx_async_abort=". The valid arguments for this option are:
+
+ ============ =============================================================
+ off This option disables the TAA mitigation on affected platforms.
+ If the system has TSX enabled (see next parameter) and the CPU
+ is affected, the system is vulnerable.
+
+ full TAA mitigation is enabled. If TSX is enabled, on an affected
+ system it will clear CPU buffers on ring transitions. On
+ systems which are MDS-affected and deploy MDS mitigation,
+ TAA is also mitigated. Specifying this option on those
+ systems will have no effect.
+
+ full,nosmt The same as tsx_async_abort=full, with SMT disabled on
+ vulnerable CPUs that have TSX enabled. This is the complete
+ mitigation. When TSX is disabled, SMT is not disabled because
+ the CPU is not vulnerable to cross-thread TAA attacks.
+ ============ =============================================================
+
+Not specifying this option is equivalent to "tsx_async_abort=full".
+
+The kernel command line also allows control of the TSX feature using the
+parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
+to control the TSX feature and the enumeration of the TSX feature bits (RTM
+and HLE) in CPUID.
+
+The valid options are:
+
+ ============ =============================================================
+ off Disables TSX.
+
+ on Enables TSX.
+
+ auto Disables TSX on affected platforms, otherwise enables TSX.
+ ============ =============================================================
+
+Not specifying this option is equivalent to "tsx=off".
+
+The following combinations of the "tsx_async_abort" and "tsx" options are
+possible. For affected platforms tsx=auto is equivalent to tsx=off and the
+result will be:
+
+ ========= ==================== =========================================
+ tsx=on tsx_async_abort=full The system will use VERW to clear CPU
+ buffers.
+ tsx=on tsx_async_abort=off The system is vulnerable.
+ tsx=off tsx_async_abort=full TSX is disabled. System is not vulnerable.
+ tsx=off tsx_async_abort=off TSX is disabled. System is not vulnerable.
+ ========= ==================== =========================================
+
+For unaffected platforms "tsx=on" and "tsx_async_abort=full" do not clear CPU
+buffers. For platforms without TSX control, the "tsx" command line argument has
+no effect.
+
+
+Mitigation selection guide
+--------------------------
+
+1. Trusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If all user space applications are from a trusted source and do not execute
+untrusted code which is supplied externally, then the mitigation can be
+disabled. The same applies to virtualized environments with trusted guests.
+
+
+2. Untrusted userspace and guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If there are untrusted applications or guests on the system, enabling TSX
+might allow a malicious actor to leak data from the host or from other
+processes running on the same physical core.
+
+If the microcode is available and TSX is disabled on the host, attacks
+are prevented in a virtualized environment as well, even if the VMs do not
+explicitly enable the mitigation.
+
+
+.. _taa_default_mitigations:
+
+Default mitigations
+-------------------
+
+The kernel's default action for vulnerable processors is:
+
+ - Deploy TSX disable mitigation (tsx_async_abort=full).
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2538,6 +2538,7 @@
spec_store_bypass_disable=off [X86,PPC]
l1tf=off [X86]
mds=off [X86]
+ tsx_async_abort=off [X86]
auto (default)
Mitigate all CPU vulnerabilities, but leave SMT
@@ -2553,6 +2554,7 @@
be fully mitigated, even if it means losing SMT.
Equivalent to: l1tf=flush,nosmt [X86]
mds=full,nosmt [X86]
+ tsx_async_abort=full,nosmt [X86]
mminit_loglevel=
[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
@@ -4528,6 +4530,40 @@
neutralize any effect of /proc/sys/kernel/sysrq.
Useful for debugging.
+ tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async
+ Abort (TAA) vulnerability.
+
+ Similar to Micro-architectural Data Sampling (MDS),
+ certain CPUs that support Transactional
+ Synchronization Extensions (TSX) are vulnerable to an
+ exploit against CPU internal buffers which can forward
+ information to a disclosure gadget under certain
+ conditions.
+
+ In vulnerable processors, the speculatively forwarded
+ data can be used in a cache side channel attack, to
+ access data to which the attacker does not have direct
+ access.
+
+ This parameter controls the TAA mitigation. The
+ options are:
+
+ full - Enable TAA mitigation on vulnerable CPUs
+ full,nosmt - Enable TAA mitigation and disable SMT on
+ vulnerable CPUs. If TSX is disabled, SMT
+ is not disabled because CPU is not
+ vulnerable to cross-thread TAA attacks.
+ off - Unconditionally disable TAA mitigation
+
+ Not specifying this option is equivalent to
+ tsx_async_abort=full. On CPUs which are MDS affected
+ and deploy MDS mitigation, TAA mitigation is not
+ required and doesn't provide any additional
+ mitigation.
+
+ For details see:
+ Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+
tcpmhash_entries= [KNL,NET]
Set the number of tcp_metrics_hash slots.
Default value is 8192 or 16384 depending on total
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -6,3 +6,4 @@ x86 architecture specifics
:maxdepth: 1
mds
+ tsx_async_abort
--- /dev/null
+++ b/Documentation/x86/tsx_async_abort.rst
@@ -0,0 +1,54 @@
+TSX Async Abort (TAA) mitigation
+=================================================
+
+.. _tsx_async_abort:
+
+Overview
+--------
+
+TSX Async Abort (TAA) is a side channel attack on internal buffers in some
+Intel processors similar to Microarchitectural Data Sampling (MDS). In this
+case certain loads may speculatively pass invalid data to dependent operations
+when an asynchronous abort condition is pending in a Transactional
+Synchronization Extensions (TSX) transaction. This includes loads with no
+fault or assist condition. Such loads may speculatively expose stale data from
+the same uarch data structures as in MDS, with the same scope of exposure, i.e.
+same-thread and cross-thread. This issue affects all current processors that
+support TSX.
+
+Mitigation strategy
+-------------------
+
+a) TSX disable - One of the mitigations is to disable the TSX feature. A new
+MSR, IA32_TSX_CTRL, is available on future processors, and on current
+processors after a microcode update, and can be used to disable TSX. This MSR
+can disable the TSX feature and the enumeration of the TSX feature bits (RTM
+and HLE) in CPUID.
+
+b) CPU clear buffers - Similar to MDS, clearing the CPU buffers mitigates this
+vulnerability. More details on this approach can be found here:
+https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
+
+Kernel internal mitigation modes
+--------------------------------
+
+ ============= ============================================================
+ off Mitigation is disabled. Either the CPU is not affected or
+ tsx_async_abort=off is supplied on the kernel command line.
+
+ tsx disabled Mitigation is enabled. TSX feature is disabled by default at
+ bootup on processors that support TSX control.
+
+ verw Mitigation is enabled. CPU is affected and MD_CLEAR is
+ advertised in CPUID.
+
+ ucode needed Mitigation is enabled. CPU is affected and MD_CLEAR is not
+ advertised in CPUID. That is mainly for virtualization
+ scenarios where the host has the updated microcode but the
+ hypervisor does not expose MD_CLEAR in CPUID. It's a best
+ effort approach without guarantee.
+ ============= ============================================================
+
+If the CPU is affected and the "tsx_async_abort" kernel command line parameter
+is not provided, then the kernel selects an appropriate mitigation depending on
+the status of the RTM and MD_CLEAR CPUID bits.


@@ -0,0 +1,385 @@
From: speck for Pawan Gupta <speck@linutronix.de>
Date: Wed, 9 Oct 2019 16:30:57 -0700
Subject: TAAv6 9
Transactional Synchronization Extensions (TSX) is an extension to the
x86 instruction set architecture (ISA) that adds Hardware Transactional
Memory (HTM) support. Changing TSX state currently requires a reboot.
This may not be desirable when rebooting imposes a huge penalty. Add
support to control the TSX feature via a new sysfs file:
/sys/devices/system/cpu/hw_tx_mem
- Writing 0|off|N|n to this file disables the TSX feature on all CPUs.
This is equivalent to the boot parameter tsx=off.
- Writing 1|on|Y|y to this file enables the TSX feature on all CPUs.
This is equivalent to the boot parameter tsx=on.
- Reading from this file returns the status of the TSX feature.
- When TSX control is not supported, this interface is not visible in
sysfs.
Changing the TSX state from this interface also updates CPUID.RTM
feature bit. From the kernel side, this feature bit doesn't result in
any ALTERNATIVE code patching. No memory allocations are done to
save/restore user state. No code paths outside of the tests for
vulnerability to TAA are dependent on the value of the feature bit. In
general the kernel doesn't care whether RTM is present or not.
Applications typically look at CPUID bits once at startup (or when first
calling into a library that uses the feature). So we have a couple of
cases to cover:
1) An application started and saw that RTM was enabled, so began
to use it. Then TSX was disabled. Net result in this case is that
the application will keep trying to use RTM, but every xbegin() will
immediately abort the transaction. This has a performance impact to
the application, but it doesn't affect correctness because all users
of RTM must have a fallback path for when the transaction aborts. Note
that even if an application is in the middle of a transaction when we
disable RTM, we are safe. The IPI that we use to update the TSX_CTRL
MSR will abort the transaction (just as any interrupt would abort
a transaction).
2) An application starts and sees RTM is not available. So it will
always use alternative paths. Even if TSX is enabled and RTM is set,
applications in general do not re-evaluate their choice so will
continue to run in non-TSX mode.
When the TSX state is changed from the sysfs interface, TSX Async Abort
(TAA) mitigation state also needs to be updated. Set the TAA mitigation
state as per TSX and VERW static branch state.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
[bwh: Backported to 4.19: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
.../ABI/testing/sysfs-devices-system-cpu | 23 ++++
.../admin-guide/hw-vuln/tsx_async_abort.rst | 29 +++++
arch/x86/kernel/cpu/bugs.c | 21 +++-
arch/x86/kernel/cpu/cpu.h | 3 +-
arch/x86/kernel/cpu/tsx.c | 100 +++++++++++++++++-
drivers/base/cpu.c | 32 +++++-
include/linux/cpu.h | 6 ++
7 files changed, 210 insertions(+), 4 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index a1bd0b6766d7..2a98f6c70add 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -513,3 +513,26 @@ Description: Control Symetric Multi Threading (SMT)
If control status is "forceoff" or "notsupported" writes
are rejected.
+
+What: /sys/devices/system/cpu/hw_tx_mem
+Date: August 2019
+Contact: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+ Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description: Hardware Transactional Memory (HTM) control.
+
+ Read/write interface to control HTM feature for all the CPUs in
+ the system. This interface is only present on platforms that
+ support HTM control. HTM is a hardware feature to speed up the
+ execution of multi-threaded software through lock elision. An
+ example of HTM implementation is Intel Transactional
+ Synchronization Extensions (TSX).
+
+ Read returns the status of HTM feature.
+
+ 0: HTM is disabled
+ 1: HTM is enabled
+
+ Write sets the state of HTM feature.
+
+ 0: Disables HTM
+ 1: Enables HTM
diff --git a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
index 58f24db49615..b62bc749fd8c 100644
--- a/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
+++ b/Documentation/admin-guide/hw-vuln/tsx_async_abort.rst
@@ -207,6 +207,35 @@ buffers. For platforms without TSX control "tsx" command line argument has no
effect.
+.. _taa_mitigation_sysfs:
+
+Mitigation control using sysfs
+------------------------------
+
+For those affected systems that cannot be frequently rebooted to enable or
+disable TSX, sysfs can be used as an alternative after installing the updates.
+The possible values for the file /sys/devices/system/cpu/hw_tx_mem are:
+
+ ============ =============================================================
+ 0 Disable TSX. Upon entering a TSX transactional region, the code
+ will immediately abort, before any instruction executes within
+ the transactional region (even speculatively), and continue on
+ the fallback path. Equivalent to boot parameter "tsx=off".
+
+ 1 Enable TSX. Equivalent to boot parameter "tsx=on".
+
+ ============ =============================================================
+
+Reading from this file returns the status of the TSX feature. This file is only
+present on systems that support TSX control.
+
+When disabling TSX by using the sysfs mechanism, applications that are already
+running and use TSX will see their transactional regions aborted and execution
+flow will be redirected to the fallback, losing the benefits of the
+non-blocking path. TSX needs fallback code to guarantee correct execution
+without transactional regions.
+
+
Mitigation selection guide
--------------------------
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c435bc5dc19b..f0a998c10056 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -274,7 +274,7 @@ early_param("mds", mds_cmdline);
#define pr_fmt(fmt) "TAA: " fmt
/* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
+static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW;
static bool taa_nosmt __ro_after_init;
static const char * const taa_strings[] = {
@@ -374,6 +374,25 @@ static int __init tsx_async_abort_cmdline(char *str)
}
early_param("tsx_async_abort", tsx_async_abort_cmdline);
+void taa_update_mitigation(bool tsx_enabled)
+{
+ /*
+ * When userspace changes the TSX state, update taa_mitigation
+ * so that the updated mitigation state is shown in:
+ * /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ *
+ * If TSX is disabled, report the TSX-disable mitigation;
+ * else if CPU buffer clearing is enabled, report the VERW
+ * mitigation; otherwise report the system as vulnerable.
+ */
+ if (!tsx_enabled)
+ taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
+ else if (static_key_count(&mds_user_clear.key))
+ taa_mitigation = TAA_MITIGATION_VERW;
+ else
+ taa_mitigation = TAA_MITIGATION_OFF;
+}
+
#undef pr_fmt
#define pr_fmt(fmt) "Spectre V1 : " fmt
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 236582c90d3f..57fd603d367f 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -52,11 +52,12 @@ enum tsx_ctrl_states {
TSX_CTRL_NOT_SUPPORTED,
};
-extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+extern enum tsx_ctrl_states tsx_ctrl_state;
extern void __init tsx_init(void);
extern void tsx_enable(void);
extern void tsx_disable(void);
+extern void taa_update_mitigation(bool tsx_enabled);
#else
static inline void tsx_init(void) { }
#endif /* CONFIG_CPU_SUP_INTEL */
diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
index e93abe6f0bb9..96320449abb7 100644
--- a/arch/x86/kernel/cpu/tsx.c
+++ b/arch/x86/kernel/cpu/tsx.c
@@ -10,12 +10,15 @@
#include <linux/processor.h>
#include <linux/cpufeature.h>
+#include <linux/cpu.h>
#include <asm/cmdline.h>
#include "cpu.h"
-enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+static DEFINE_MUTEX(tsx_mutex);
+
+enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED;
void tsx_disable(void)
{
@@ -118,3 +121,98 @@ void __init tsx_init(void)
setup_force_cpu_cap(X86_FEATURE_RTM);
}
}
+
+static void tsx_update_this_cpu(void *arg)
+{
+ unsigned long enable = (unsigned long)arg;
+
+ if (enable)
+ tsx_enable();
+ else
+ tsx_disable();
+}
+
+/* Take tsx_mutex lock and update tsx_ctrl_state when calling this function */
+static void tsx_update_on_each_cpu(bool val)
+{
+ get_online_cpus();
+ on_each_cpu(tsx_update_this_cpu, (void *)val, 1);
+ put_online_cpus();
+}
+
+ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%d\n", tsx_ctrl_state == TSX_CTRL_ENABLE ? 1 : 0);
+}
+
+ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ enum tsx_ctrl_states requested_state;
+ ssize_t ret;
+ bool val;
+
+ ret = kstrtobool(buf, &val);
+ if (ret)
+ return ret;
+
+ mutex_lock(&tsx_mutex);
+
+ if (val)
+ requested_state = TSX_CTRL_ENABLE;
+ else
+ requested_state = TSX_CTRL_DISABLE;
+
+ /* Current state is same as the requested state, do nothing */
+ if (tsx_ctrl_state == requested_state)
+ goto exit;
+
+ tsx_ctrl_state = requested_state;
+
+ /*
+ * Changing the TSX state from this interface also updates CPUID.RTM
+ * feature bit. From the kernel side, this feature bit doesn't result
+ * in any ALTERNATIVE code patching. No memory allocations are done to
+ * save/restore user state. No code paths in outside of the tests for
+ * vulnerability to TAA are dependent on the value of the feature bit.
+ * In general the kernel doesn't care whether RTM is present or not.
+ *
+ * From the user side it is a bit fuzzier. Applications typically look
+ * at CPUID bits once at startup (or when first calling into a library
+ * that uses the feature). So we have a couple of cases to cover:
+ *
+ * 1) An application started and saw that RTM was enabled, so began
+ * to use it. Then TSX was disabled. Net result in this case is
+ * that the application will keep trying to use RTM, but every
+ * xbegin() will immediately abort the transaction. This has a
+ * performance impact to the application, but it doesn't affect
+ * correctness because all users of RTM must have a fallback path
+ * for when the transaction aborts. Note that even if an application
+ * is in the middle of a transaction when we disable RTM, we are
+ * safe. The IPI that we use to update the TSX_CTRL MSR will abort
+ * the transaction (just as any interrupt would abort a
+ * transaction).
+ *
+ * 2) An application starts and sees RTM is not available. So it will
+ * always use alternative paths. Even if TSX is enabled and RTM is
+ * set, applications in general do not re-evaluate their choice so
+ * will continue to run in non-TSX mode.
+ */
+ tsx_update_on_each_cpu(val);
+
+ if (boot_cpu_has_bug(X86_BUG_TAA))
+ taa_update_mitigation(val);
+exit:
+ mutex_unlock(&tsx_mutex);
+
+ return count;
+}
+
+umode_t hw_tx_mem_is_visible(void)
+{
+ if (tsx_ctrl_state == TSX_CTRL_NOT_SUPPORTED)
+ return 0;
+
+ return 0644;
+}
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index e9e7fde0fe00..ebc46fd81762 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -458,6 +458,34 @@ struct device *cpu_device_create(struct device *parent, void *drvdata,
}
EXPORT_SYMBOL_GPL(cpu_device_create);
+ssize_t __weak hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+ char *buf)
+{
+ return -ENODEV;
+}
+
+ssize_t __weak hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+ const char *buf, size_t count)
+{
+ return -ENODEV;
+}
+
+DEVICE_ATTR_RW(hw_tx_mem);
+
+umode_t __weak hw_tx_mem_is_visible(void)
+{
+ return 0;
+}
+
+static umode_t cpu_root_attrs_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ if (attr == &dev_attr_hw_tx_mem.attr)
+ return hw_tx_mem_is_visible();
+
+ return attr->mode;
+}
+
#ifdef CONFIG_GENERIC_CPU_AUTOPROBE
static DEVICE_ATTR(modalias, 0444, print_cpu_modalias, NULL);
#endif
@@ -479,11 +507,13 @@ static struct attribute *cpu_root_attrs[] = {
#ifdef CONFIG_GENERIC_CPU_AUTOPROBE
&dev_attr_modalias.attr,
#endif
+ &dev_attr_hw_tx_mem.attr,
NULL
};
static struct attribute_group cpu_root_attr_group = {
- .attrs = cpu_root_attrs,
+ .attrs = cpu_root_attrs,
+ .is_visible = cpu_root_attrs_is_visible,
};
static const struct attribute_group *cpu_root_attr_groups[] = {
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 9d8dba19844e..7bd8ced5c000 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -65,6 +65,12 @@ extern ssize_t cpu_show_tsx_async_abort(struct device *dev,
struct device_attribute *attr,
char *buf);
+extern ssize_t hw_tx_mem_show(struct device *dev, struct device_attribute *a,
+ char *buf);
+extern ssize_t hw_tx_mem_store(struct device *dev, struct device_attribute *a,
+ const char *buf, size_t count);
+extern umode_t hw_tx_mem_is_visible(void);
+
extern __printf(4, 5)
struct device *cpu_device_create(struct device *parent, void *drvdata,
const struct attribute_group **groups,


@@ -270,6 +270,15 @@ bugfix/x86//itlb_multihit/0009-x86-Add-ITLB_MULTIHIT-bug-infrastructure.patch
bugfix/x86//itlb_multihit/0010-kvm-mmu-ITLB_MULTIHIT-mitigation.patch
bugfix/x86//itlb_multihit/0011-kvm-Add-helper-function-for-creating-VM-worker-threa.patch
bugfix/x86//itlb_multihit/0012-kvm-x86-mmu-Recovery-of-shattered-NX-large-pages.patch
bugfix/x86/taa/0013-TAAv6-1.patch
bugfix/x86/taa/0014-TAAv6-2.patch
bugfix/x86/taa/0015-TAAv6-3.patch
bugfix/x86/taa/0016-TAAv6-4.patch
bugfix/x86/taa/0017-TAAv6-5.patch
bugfix/x86/taa/0018-TAAv6-6.patch
bugfix/x86/taa/0019-TAAv6-7.patch
bugfix/x86/taa/0020-TAAv6-8.patch
bugfix/x86/taa/0021-TAAv6-9.patch
# ABI maintenance
debian/abi/powerpc-avoid-abi-change-for-disabling-tm.patch