The bug has most of the context for this fix. Basically, the cloud image
disables TPM drivers, and we want to re-enable them.
I added the virt and hardware-agnostic drivers (TIS/CRB/XEN/VTPM), and
I explicitly didn't add the hardware-specific drivers. I also didn't
bother with CONFIG_HW_RANDOM_TPM as we already set
CONFIG_RANDOM_TRUST_CPU=y which handles any early-boot RNG issues.
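The change above would correspond roughly to a config fragment like the
following (a sketch, not the exact patch: the symbol values shown, and
whether each driver is built-in or modular, are assumptions):

```
# Core TPM support plus the virt/hardware-agnostic interface drivers
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=y
CONFIG_TCG_CRB=y
CONFIG_TCG_XEN=m
CONFIG_TCG_VTPM_PROXY=m
# Hardware-specific drivers (e.g. CONFIG_TCG_ATMEL) deliberately left unset
# CONFIG_HW_RANDOM_TPM also left unset: CONFIG_RANDOM_TRUST_CPU=y already
# covers early-boot entropy
```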
Signed-off-by: Joe Richey <joerichey@google.com>
In order to access Azure's VMbus via /sys/bus/vmbus, the corresponding
UIO module must be available.
Also enable VFIO for safe userspace device handling when the host
exposes a vIOMMU.
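In config terms this amounts to something like the fragment below
(uio_hv_generic is the standard VMbus UIO driver; building everything as
modules is an assumption):

```
# Hyper-V VMbus userspace I/O (uio_hv_generic)
CONFIG_UIO=m
CONFIG_UIO_HV_GENERIC=m
# VFIO for safe userspace device handling behind a vIOMMU
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
```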
- Various config symbols were removed, renamed or split
- HOTPLUG_PCI_SHPC is now boolean, so set it to built-in
- The stack protector config symbols were changed to two booleans
with different names
- Various ancient SCSI drivers were removed
- BT_HCIBTUART and INFINIBAND_CXGB3_DEBUG were removed
- OMAP_DM_TIMER is now an automatic symbol
- Marvell NAND driver was rewritten, so we enable MTD_NAND_MARVELL
instead of MTD_NAND_PXA3xx
- Various netfilter symbols are now boolean instead of tristate
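A few of the renames above can be illustrated with a config fragment
(values are a sketch of the intent, not the literal diff):

```
# HOTPLUG_PCI_SHPC is now boolean, so built-in rather than modular
CONFIG_HOTPLUG_PCI_SHPC=y
# The old CC_STACKPROTECTOR choice became two booleans
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
# Rewritten Marvell NAND driver replaces the PXA3xx one
CONFIG_MTD_NAND_MARVELL=m
# CONFIG_MTD_NAND_PXA3xx is not set
```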
As discussed on d-kernel, this flavour is added as an experiment at the
request of Microsoft. For now it is only tested on Microsoft Azure.
It will be expanded to cover the other public cloud platforms as well.
These platforms will need additional drivers.