Paolo Bonzini wrote on 2015-04-30:
> This patch series introduces system management mode support.

Just curious, what is the motivation for adding vSMM support? Is there any use case inside the guest that requires SMM? Thanks.

> There is still some work to do, namely: test without unrestricted
> guest support, test on AMD, disable the capability if !unrestricted
> guest and !emulate invalid guest state (*), test with a QEMU that
> understands KVM_MEM_X86_SMRAM, and actually post the QEMU patches that
> let you use this.
>
> (*) newer chipsets moved away from legacy SMRAM at 0xa0000,
> thus support for a real mode CS base above 1M is necessary
>
> Because legacy SMRAM is a mess, I have tried these patches with Q35's
> high SMRAM (at 0xfeda0000). This means that right now this isn't the
> easiest thing to test; you need QEMU patches that add support for high
> SMRAM, and SeaBIOS patches to use high SMRAM. Until QEMU support for
> KVM_MEM_X86_SMRAM is in place, I'm also keeping SMRAM open in SeaBIOS.
>
> That said, even this clumsy and incomplete userspace configuration is
> enough to test all patches except 11 and 12.
>
> The series is structured as follows.
>
> Patch 1 is an unrelated bugfix (I think). Patches 2 to 6 extend some
> infrastructure functions. Patches 1 to 4 could be committed right now.
>
> Patches 7 to 9 implement basic support for SMM in the KVM API and
> teach KVM about doing the world switch on SMI and RSM.
>
> Patch 10 touches all places in KVM that read/write guest memory so
> that they go through an x86-specific function. The x86-specific
> function takes a VCPU rather than a struct kvm. This is used in
> patches 11 and 12 to limit access to specially marked SMRAM slots
> unless the VCPU is in system management mode.
>
> Finally, patch 13 exposes the new capability for userspace to probe.

Best regards,
Yang
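
(For reference, a minimal sketch of how userspace might probe the capability mentioned for patch 13 once the series is merged. The capability name KVM_CAP_X86_SMM and the fallback number below are assumptions based on this series, not something guaranteed by the kernel headers at the time of this thread.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_X86_SMM
#define KVM_CAP_X86_SMM 117   /* assumption: number the series would assign */
#endif

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }
    /* KVM_CHECK_EXTENSION returns a positive value when the capability
     * is available and 0 when it is not. */
    int ret = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_X86_SMM);
    printf("KVM_CAP_X86_SMM: %s\n", ret > 0 ? "supported" : "not supported");
    return 0;
}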