* [RFC PATCH V2] vhost: don't use kmap() to log dirty pages
@ 2019-05-09 12:58 Jason Wang
2019-05-09 13:18 ` Michael S. Tsirkin
0 siblings, 1 reply; 5+ messages in thread
From: Jason Wang @ 2019-05-09 12:58 UTC (permalink / raw)
To: mst, jasowang, kvm, virtualization, netdev, linux-kernel
Cc: Christoph Hellwig, James Bottomley, Andrea Arcangeli,
Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Darren Hart
Vhost logs dirty pages directly to a userspace bitmap through GUP and
kmap_atomic(), since the kernel doesn't have a set_bit_to_user()
helper. This causes problems on architectures with virtually tagged
caches. The fix is to keep using the userspace virtual
address. Fortunately, futex has arch_futex_atomic_op_inuser(), which
can be used to set a bit in userspace memory.
Note:
- A few (unpopular) architectures don't implement the futex helper, so
  dirty pages can't be logged there. They can be fixed later, e.g. by
  implementing a kmap() fallback on top for non-virtually-tagged
  architectures, or by simply disabling the LOG_ALL feature of vhost.
- The helper also requires the userspace pointer to be 4-byte
  aligned; this needs to be checked when the dirty log is set up.
Cc: Christoph Hellwig <hch@infradead.org>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Darren Hart <dvhart@infradead.org>
Fixes: 3a4d5c94e9593 ("vhost_net: a kernel-level virtio server")
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
Changes from V1:
- switch to use arch_futex_atomic_op_inuser()
---
drivers/vhost/vhost.c | 35 +++++++++++++++++------------------
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 351af88..4e5a004 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -31,6 +31,7 @@
#include <linux/sched/signal.h>
#include <linux/interval_tree_generic.h>
#include <linux/nospec.h>
+#include <asm/futex.h>
#include "vhost.h"
@@ -1652,6 +1653,10 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
r = -EFAULT;
break;
}
+ if (p & 0x3) {
+ r = -EINVAL;
+ break;
+ }
for (i = 0; i < d->nvqs; ++i) {
struct vhost_virtqueue *vq;
void __user *base = (void __user *)(unsigned long)p;
@@ -1692,31 +1697,27 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
}
EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
-/* TODO: This is really inefficient. We need something like get_user()
- * (instruction directly accesses the data, with an exception table entry
- * returning -EFAULT). See Documentation/x86/exception-tables.txt.
- */
-static int set_bit_to_user(int nr, void __user *addr)
+static int set_bit_to_user(int nr, u32 __user *addr)
{
unsigned long log = (unsigned long)addr;
struct page *page;
- void *base;
- int bit = nr + (log % PAGE_SIZE) * 8;
+ u32 old;
int r;
r = get_user_pages_fast(log, 1, 1, &page);
if (r < 0)
return r;
BUG_ON(r != 1);
- base = kmap_atomic(page);
- set_bit(bit, base);
- kunmap_atomic(base);
+
+ r = arch_futex_atomic_op_inuser(FUTEX_OP_ADD, 1 << nr, &old, addr);
+ /* TODO: fallback to kmap() when -ENOSYS? */
+
set_page_dirty_lock(page);
put_page(page);
- return 0;
+ return r;
}
-static int log_write(void __user *log_base,
+static int log_write(u32 __user *log_base,
u64 write_address, u64 write_length)
{
u64 write_page = write_address / VHOST_PAGE_SIZE;
@@ -1726,12 +1727,10 @@ static int log_write(void __user *log_base,
return 0;
write_length += write_address % VHOST_PAGE_SIZE;
for (;;) {
- u64 base = (u64)(unsigned long)log_base;
- u64 log = base + write_page / 8;
- int bit = write_page % 8;
- if ((u64)(unsigned long)log != log)
- return -EFAULT;
- r = set_bit_to_user(bit, (void __user *)(unsigned long)log);
+ u32 __user *log = log_base + write_page / 32;
+ int bit = write_page % 32;
+
+ r = set_bit_to_user(bit, log);
if (r < 0)
return r;
if (write_length <= VHOST_PAGE_SIZE)
--
1.8.3.1
* Re: [RFC PATCH V2] vhost: don't use kmap() to log dirty pages
2019-05-09 12:58 [RFC PATCH V2] vhost: don't use kmap() to log dirty pages Jason Wang
@ 2019-05-09 13:18 ` Michael S. Tsirkin
2019-05-10 2:59 ` Jason Wang
From: Michael S. Tsirkin @ 2019-05-09 13:18 UTC (permalink / raw)
To: Jason Wang
Cc: kvm, virtualization, netdev, linux-kernel, Christoph Hellwig,
James Bottomley, Andrea Arcangeli, Thomas Gleixner, Ingo Molnar,
Peter Zijlstra, Darren Hart
On Thu, May 09, 2019 at 08:58:00AM -0400, Jason Wang wrote:
> Vhost logs dirty pages directly to a userspace bitmap through GUP and
> kmap_atomic(), since the kernel doesn't have a set_bit_to_user()
> helper. This causes problems on architectures with virtually tagged
> caches. The fix is to keep using the userspace virtual
> address. Fortunately, futex has arch_futex_atomic_op_inuser(), which
> can be used to set a bit in userspace memory.
>
> Note:
> - A few (unpopular) architectures don't implement the futex helper, so
>   dirty pages can't be logged there. They can be fixed later, e.g. by
>   implementing a kmap() fallback on top for non-virtually-tagged
>   architectures, or by simply disabling the LOG_ALL feature of vhost.
> - The helper also requires the userspace pointer to be 4-byte
>   aligned; this needs to be checked when the dirty log is set up.
Why check? Round it down.
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Darren Hart <dvhart@infradead.org>
> Fixes: 3a4d5c94e9593 ("vhost_net: a kernel-level virtio server")
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> Changes from V1:
> - switch to use arch_futex_atomic_op_inuser()
> ---
> drivers/vhost/vhost.c | 35 +++++++++++++++++------------------
> 1 file changed, 17 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 351af88..4e5a004 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -31,6 +31,7 @@
> #include <linux/sched/signal.h>
> #include <linux/interval_tree_generic.h>
> #include <linux/nospec.h>
> +#include <asm/futex.h>
>
> #include "vhost.h"
>
> @@ -1652,6 +1653,10 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> r = -EFAULT;
> break;
> }
> + if (p & 0x3) {
> + r = -EINVAL;
> + break;
> + }
> for (i = 0; i < d->nvqs; ++i) {
> struct vhost_virtqueue *vq;
> void __user *base = (void __user *)(unsigned long)p;
That's an ABI change and might break some userspace. I don't think
it's necessary: you are changing individual bits anyway.
> @@ -1692,31 +1697,27 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
> }
> EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
>
> -/* TODO: This is really inefficient. We need something like get_user()
> - * (instruction directly accesses the data, with an exception table entry
> - * returning -EFAULT). See Documentation/x86/exception-tables.txt.
> - */
> -static int set_bit_to_user(int nr, void __user *addr)
> +static int set_bit_to_user(int nr, u32 __user *addr)
> {
> unsigned long log = (unsigned long)addr;
> struct page *page;
> - void *base;
> - int bit = nr + (log % PAGE_SIZE) * 8;
> + u32 old;
> int r;
>
> r = get_user_pages_fast(log, 1, 1, &page);
OK, so the trick is that the page is pinned, so you don't expect
arch_futex_atomic_op_inuser() below to fail. get_user_pages_fast()
guarantees the page is not going away, but does it guarantee the PTE
won't be invalidated or write-protected?
> if (r < 0)
> return r;
> BUG_ON(r != 1);
> - base = kmap_atomic(page);
> - set_bit(bit, base);
> - kunmap_atomic(base);
> +
> + r = arch_futex_atomic_op_inuser(FUTEX_OP_ADD, 1 << nr, &old, addr);
> + /* TODO: fallback to kmap() when -ENOSYS? */
> +
Add a comment explaining why this won't fail? Maybe warn on -EFAULT?
Also, down the road, a variant that does not need tricks like this
would still be nice to have.
> set_page_dirty_lock(page);
> put_page(page);
> - return 0;
> + return r;
> }
>
> -static int log_write(void __user *log_base,
> +static int log_write(u32 __user *log_base,
> u64 write_address, u64 write_length)
> {
> u64 write_page = write_address / VHOST_PAGE_SIZE;
> @@ -1726,12 +1727,10 @@ static int log_write(void __user *log_base,
> return 0;
> write_length += write_address % VHOST_PAGE_SIZE;
> for (;;) {
> - u64 base = (u64)(unsigned long)log_base;
> - u64 log = base + write_page / 8;
> - int bit = write_page % 8;
> - if ((u64)(unsigned long)log != log)
> - return -EFAULT;
> - r = set_bit_to_user(bit, (void __user *)(unsigned long)log);
> + u32 __user *log = log_base + write_page / 32;
> + int bit = write_page % 32;
> +
> + r = set_bit_to_user(bit, log);
> if (r < 0)
> return r;
> if (write_length <= VHOST_PAGE_SIZE)
> --
> 1.8.3.1
* Re: [RFC PATCH V2] vhost: don't use kmap() to log dirty pages
2019-05-09 13:18 ` Michael S. Tsirkin
@ 2019-05-10 2:59 ` Jason Wang
2019-05-10 4:48 ` Jason Wang
From: Jason Wang @ 2019-05-10 2:59 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: kvm, virtualization, netdev, linux-kernel, Christoph Hellwig,
James Bottomley, Andrea Arcangeli, Thomas Gleixner, Ingo Molnar,
Peter Zijlstra, Darren Hart
On 2019/5/9 9:18 PM, Michael S. Tsirkin wrote:
> On Thu, May 09, 2019 at 08:58:00AM -0400, Jason Wang wrote:
>> Vhost logs dirty pages directly to a userspace bitmap through GUP and
>> kmap_atomic(), since the kernel doesn't have a set_bit_to_user()
>> helper. This causes problems on architectures with virtually tagged
>> caches. The fix is to keep using the userspace virtual
>> address. Fortunately, futex has arch_futex_atomic_op_inuser(), which
>> can be used to set a bit in userspace memory.
>>
>> Note:
>> - A few (unpopular) architectures don't implement the futex helper, so
>>   dirty pages can't be logged there. They can be fixed later, e.g. by
>>   implementing a kmap() fallback on top for non-virtually-tagged
>>   architectures, or by simply disabling the LOG_ALL feature of vhost.
>> - The helper also requires the userspace pointer to be 4-byte
>>   aligned; this needs to be checked when the dirty log is set up.
> Why check? Round it down.
Will do this.
>
>> Cc: Christoph Hellwig <hch@infradead.org>
>> Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Darren Hart <dvhart@infradead.org>
>> Fixes: 3a4d5c94e9593 ("vhost_net: a kernel-level virtio server")
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> Changes from V1:
>> - switch to use arch_futex_atomic_op_inuser()
>> ---
>> drivers/vhost/vhost.c | 35 +++++++++++++++++------------------
>> 1 file changed, 17 insertions(+), 18 deletions(-)
>>
>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>> index 351af88..4e5a004 100644
>> --- a/drivers/vhost/vhost.c
>> +++ b/drivers/vhost/vhost.c
>> @@ -31,6 +31,7 @@
>> #include <linux/sched/signal.h>
>> #include <linux/interval_tree_generic.h>
>> #include <linux/nospec.h>
>> +#include <asm/futex.h>
>>
>> #include "vhost.h"
>>
>> @@ -1652,6 +1653,10 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
>> r = -EFAULT;
>> break;
>> }
>> + if (p & 0x3) {
>> + r = -EINVAL;
>> + break;
>> + }
>> for (i = 0; i < d->nvqs; ++i) {
>> struct vhost_virtqueue *vq;
>> void __user *base = (void __user *)(unsigned long)p;
> That's an ABI change and might break some userspace. I don't think
> it's necessary: you are changing individual bits anyway.
Right.
>
>> @@ -1692,31 +1697,27 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
>> }
>> EXPORT_SYMBOL_GPL(vhost_dev_ioctl);
>>
>> -/* TODO: This is really inefficient. We need something like get_user()
>> - * (instruction directly accesses the data, with an exception table entry
>> - * returning -EFAULT). See Documentation/x86/exception-tables.txt.
>> - */
>> -static int set_bit_to_user(int nr, void __user *addr)
>> +static int set_bit_to_user(int nr, u32 __user *addr)
>> {
>> unsigned long log = (unsigned long)addr;
>> struct page *page;
>> - void *base;
>> - int bit = nr + (log % PAGE_SIZE) * 8;
>> + u32 old;
>> int r;
>>
>> r = get_user_pages_fast(log, 1, 1, &page);
> OK, so the trick is that the page is pinned, so you don't expect
> arch_futex_atomic_op_inuser() below to fail. get_user_pages_fast()
> guarantees the page is not going away, but does it guarantee the PTE
> won't be invalidated or write-protected?
Good point; then I think we probably need to do a manual fixup through
fixup_user_fault() if arch_futex_atomic_op_inuser() fails.
>
>> if (r < 0)
>> return r;
>> BUG_ON(r != 1);
>> - base = kmap_atomic(page);
>> - set_bit(bit, base);
>> - kunmap_atomic(base);
>> +
>> + r = arch_futex_atomic_op_inuser(FUTEX_OP_ADD, 1 << nr, &old, addr);
>> + /* TODO: fallback to kmap() when -ENOSYS? */
>> +
> Add a comment explaining why this won't fail? Maybe warn on -EFAULT?
>
> Also, down the road, a variant that does not need tricks like this
> would still be nice to have.
Ok. Let me post a V3.
Thanks
>
>
>> set_page_dirty_lock(page);
>> put_page(page);
>> - return 0;
>> + return r;
>> }
>>
>> -static int log_write(void __user *log_base,
>> +static int log_write(u32 __user *log_base,
>> u64 write_address, u64 write_length)
>> {
>> u64 write_page = write_address / VHOST_PAGE_SIZE;
>> @@ -1726,12 +1727,10 @@ static int log_write(void __user *log_base,
>> return 0;
>> write_length += write_address % VHOST_PAGE_SIZE;
>> for (;;) {
>> - u64 base = (u64)(unsigned long)log_base;
>> - u64 log = base + write_page / 8;
>> - int bit = write_page % 8;
>> - if ((u64)(unsigned long)log != log)
>> - return -EFAULT;
>> - r = set_bit_to_user(bit, (void __user *)(unsigned long)log);
>> + u32 __user *log = log_base + write_page / 32;
>> + int bit = write_page % 32;
>> +
>> + r = set_bit_to_user(bit, log);
>> if (r < 0)
>> return r;
>> if (write_length <= VHOST_PAGE_SIZE)
>> --
>> 1.8.3.1
* Re: [RFC PATCH V2] vhost: don't use kmap() to log dirty pages
2019-05-10 2:59 ` Jason Wang
@ 2019-05-10 4:48 ` Jason Wang
2019-05-13 5:22 ` Jason Wang
From: Jason Wang @ 2019-05-10 4:48 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: kvm, virtualization, netdev, linux-kernel, Christoph Hellwig,
James Bottomley, Andrea Arcangeli, Thomas Gleixner, Ingo Molnar,
Peter Zijlstra, Darren Hart
On 2019/5/10 10:59 AM, Jason Wang wrote:
>>>
>>> r = get_user_pages_fast(log, 1, 1, &page);
>> OK, so the trick is that the page is pinned, so you don't expect
>> arch_futex_atomic_op_inuser() below to fail. get_user_pages_fast()
>> guarantees the page is not going away, but does it guarantee the PTE
>> won't be invalidated or write-protected?
>
>
> Good point; then I think we probably need to do a manual fixup through
> fixup_user_fault() if arch_futex_atomic_op_inuser() fails.
This looks like overkill; we don't actually need an atomic environment
here. Instead, just keeping page faults enabled should work. So
introducing an arch_futex_atomic_op_inuser_inatomic() variant with
page faults disabled, used just for futex, should be sufficient.
Thanks
* Re: [RFC PATCH V2] vhost: don't use kmap() to log dirty pages
2019-05-10 4:48 ` Jason Wang
@ 2019-05-13 5:22 ` Jason Wang
From: Jason Wang @ 2019-05-13 5:22 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: kvm, virtualization, netdev, linux-kernel, Christoph Hellwig,
James Bottomley, Andrea Arcangeli, Thomas Gleixner, Ingo Molnar,
Peter Zijlstra, Darren Hart
On 2019/5/10 12:48 PM, Jason Wang wrote:
>
> On 2019/5/10 10:59 AM, Jason Wang wrote:
>>>>
>>>> r = get_user_pages_fast(log, 1, 1, &page);
>>> OK, so the trick is that the page is pinned, so you don't expect
>>> arch_futex_atomic_op_inuser() below to fail. get_user_pages_fast()
>>> guarantees the page is not going away, but does it guarantee the PTE
>>> won't be invalidated or write-protected?
>>
>>
>> Good point; then I think we probably need to do a manual fixup through
>> fixup_user_fault() if arch_futex_atomic_op_inuser() fails.
>
>
> This looks like overkill; we don't actually need an atomic environment
> here. Instead, just keeping page faults enabled should work. So
> introducing an arch_futex_atomic_op_inuser_inatomic() variant with
> page faults disabled, used just for futex, should be sufficient.
>
> Thanks
OK, instead of using tricks, I think we can gracefully fall back to a
get_user()/put_user() pair protected by a mutex.
Let me post a non-RFC version for this.
Thanks