Drop "KVM: VMX: Zero out *all* general purpose registers after VM-Exit"

This is not needed to fix CVE-2019-3016, and it addresses an issue
that is so far only theoretical.  It would also need a further fix to
avoid causing a more serious regression (depending on compiler
behaviour).
Ben Hutchings 2020-06-07 01:13:16 +01:00
parent ff5ad5a3d1
commit 22423990cd
3 changed files with 0 additions and 69 deletions

debian/changelog

@@ -16,7 +16,6 @@ linux (4.19.118-2+deb10u1) UNRELEASED; urgency=medium
   * kernel/relay.c: handle alloc_percpu returning NULL in relay_open
     (CVE-2019-19462)
   * mm: Fix mremap not considering huge pmd devmap (CVE-2020-10757)
-  * [x86] KVM: VMX: Zero out *all* general purpose registers after VM-Exit
   * [x86] KVM: nVMX: Always sync GUEST_BNDCFGS when it comes from vmcs01
   * KVM: Introduce a new guest mapping API
   * [arm64] kvm: fix compilation on aarch64

debian/patches/bugfix/x86/KVM-VMX-Zero-out-all-general-purpose-registers-after.patch
@@ -1,67 +0,0 @@
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Fri, 25 Jan 2019 07:40:50 -0800
Subject: [01/11] KVM: VMX: Zero out *all* general purpose registers after
VM-Exit
Origin: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit?id=b4be98039a9224ec0cfc2b706e8e881b9ba53850
commit 0e0ab73c9a0243736bcd779b30b717e23ba9a56d upstream.
...except RSP, which is restored by hardware as part of VM-Exit.
Paolo theorized that restoring registers from the stack after a VM-Exit
in lieu of zeroing them could lead to speculative execution with the
guest's values, e.g. if the stack accesses miss the L1 cache[1].
Zeroing XORs are dirt cheap, so just be ultra-paranoid.
Note that the scratch register (currently RCX) used to save/restore the
guest state is also zeroed as its host-defined value is loaded via the
stack, just with a MOV instead of a POP.
[1] https://patchwork.kernel.org/patch/10771539/#22441255
Fixes: 0cb5b30698fd ("kvm: vmx: Scrub hardware GPRs at VM-exit")
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[bwh: Backported to 4.19: adjust filename, context]
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/x86/kvm/vmx.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index d37b48173e9c..e4d0ad06790e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10841,6 +10841,15 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
"mov %%r13, %c[r13](%0) \n\t"
"mov %%r14, %c[r14](%0) \n\t"
"mov %%r15, %c[r15](%0) \n\t"
+
+ /*
+ * Clear all general purpose registers (except RSP, which is loaded by
+ * the CPU during VM-Exit) to prevent speculative use of the guest's
+ * values, even those that are saved/loaded via the stack. In theory,
+ * an L1 cache miss when restoring registers could lead to speculative
+ * execution with the guest's values. Zeroing XORs are dirt cheap,
+ * i.e. the extra paranoia is essentially free.
+ */
"xor %%r8d, %%r8d \n\t"
"xor %%r9d, %%r9d \n\t"
"xor %%r10d, %%r10d \n\t"
@@ -10855,8 +10864,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
"xor %%eax, %%eax \n\t"
"xor %%ebx, %%ebx \n\t"
+ "xor %%ecx, %%ecx \n\t"
+ "xor %%edx, %%edx \n\t"
"xor %%esi, %%esi \n\t"
"xor %%edi, %%edi \n\t"
+ "xor %%ebp, %%ebp \n\t"
"pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t"
".pushsection .rodata \n\t"
".global vmx_return \n\t"
--
2.27.0.rc0

debian/patches/series
@@ -312,7 +312,6 @@ bugfix/all/fs-binfmt_elf.c-allocate-initialized-memory-in-fill_.patch
 bugfix/all/kernel-relay.c-handle-alloc_percpu-returning-NULL-in.patch
 bugfix/all/mm-Fix-mremap-not-considering-huge-pmd-devmap.patch
 # pre-requisites and CVE-2019-3016
-bugfix/x86/KVM-VMX-Zero-out-all-general-purpose-registers-after.patch
 bugfix/x86/KVM-nVMX-Always-sync-GUEST_BNDCFGS-when-it-comes-fro.patch
 bugfix/all/KVM-Introduce-a-new-guest-mapping-API.patch
 bugfix/arm64/kvm-fix-compilation-on-aarch64.patch