From: Jann Horn <jannh@google.com>
To: ahmedsoliman0x666@gmail.com
Cc: kvm@vger.kernel.org, "Kernel Hardening" <kernel-hardening@lists.openwall.com>, virtualization@lists.linux-foundation.org, linux-doc@vger.kernel.org, "the arch/x86 maintainers" <x86@kernel.org>, "Paolo Bonzini" <pbonzini@redhat.com>, "Radim Krčmář" <rkrcmar@redhat.com>, "Jonathan Corbet" <corbet@lwn.net>, "Thomas Gleixner" <tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>, "Kees Cook" <keescook@chromium.org>, "Ard Biesheuvel" <ard.biesheuvel@linaro.org>, david@redhat.com, "Boris Lukashev" <blukashev@sempervictus.com>, david.vrabel@nutanix.com, nigel.edwards@hpe.com, riel@surriel.com
Subject: Re: [PATCH 3/3] [RFC V3] KVM: X86: Adding skeleton for Memory ROE
Date: Fri, 20 Jul 2018 03:28:15 +0200
Message-ID: <CAG48ez0+KiOhyX1R3=FjWQe5M0MFZ5GC=AkV6ZiSYK3OBXsS+A@mail.gmail.com>
In-Reply-To: <CAAGnT3aQhvHJ4H1vaTYiYotN022wh1if76=xywTVGqA5o_UQrA@mail.gmail.com>

On Fri, Jul 20, 2018 at 2:26 AM Ahmed Soliman <ahmedsoliman0x666@gmail.com> wrote:
>
> On 20 July 2018 at 00:59, Jann Horn <jannh@google.com> wrote:
> > On Thu, Jul 19, 2018 at 11:40 PM Ahmed Abd El Mawgood
> >
> > > Why are you implementing this in the kernel, instead of doing it in
> > > host userspace?
>
> I thought about implementing it completely in QEMU, but it won't be
> possible for a few reasons:
>
> - After talking to QEMU folks, I came to the conclusion that when it
> comes to managing memory allocated for a guest, it is always better to
> let KVM handle everything, unless there is a good reason to play with
> that memory chunk inside QEMU itself.

Why? It seems to me like it'd be easier to add a way to mprotect()
guest pages to readonly via virtio or whatever in QEMU than to add
kernel code? And if you ever want to support VM snapshotting/resumption,
you'll need support for restoring the protection flags from QEMU anyway.
> - But actually there is a good reason for implementing ROE in kernel
> space: ROE is architecture dependent to a great extent.

How so? The host component just has to make pages in guest memory
readonly, right? As far as I can tell, from QEMU, it'd more or less be
a matter of calling mprotect() a few times? (Plus potentially some
hooks to prevent other virtio code from crashing by attempting to
access protected pages - but you'd need that anyway, no matter where
the protection for the guest is enforced.)

> I should have emphasized that the only currently supported
> architecture is x86. I am not sure how deep the dependency on the
> architecture goes. But as of now, the current set of patches does an
> SPTE enumeration as part of the process. To the best of my knowledge,
> this isn't exposed outside arch/x86/kvm, let alone having a host user
> space interface for it. Also, the way I am planning to protect the TLB
> from malicious gva -> gpa mappings is by knowing that on x86 it is
> possible to VMEXIT on page faults; I am not sure if it is safe to
> assume that all KVM-supported architectures behave this way.

You mean EPT faults, right? If so: I think all architectures have to
support that - there are already other reasons why random guest memory
accesses can fault. In particular, the host can page out guest memory.
I think that's the case on all architectures?

> For these reasons, I thought it would be better if the arch-dependent
> stuff (the mechanism implementation) is kept in the arch/*/kvm folder,
> with minimal modifications to virt/kvm/* after setting a kconfig
> variable to enable ROE. But I left room for the user space app using
> KVM to decide the rightful policy for handling ROE violations. It
> works by reporting a KVM_EXIT_MMIO error to user space, keeping all
> the architectural details hidden away from user space.
> A last note is that I didn't create this from scratch; instead, I
> extended the KVM_MEM_READONLY implementation to also allow R/O per
> page instead of R/O per whole slot, which is already done in kernel
> space.

But then you still have to also do something about virtio code in QEMU
that might write to those pages, right?