* presentation at kvm forum and pagefaults
From: Michael S. Tsirkin @ 2019-11-01  4:06 UTC
  To: virtio-dev, qemu-devel, virtualization

Following up on the presentation I gave at the KVM Forum
on page faults.

Two points:


1. Page faults are important not just for migration.
They also matter for performance features such as
autonuma and huge pages, since these rely on moving
pages around.
Migration can maybe be solved by switching to a software implementation,
but this is not a good solution for NUMA and THP, since
at any given time some page is likely being moved.




2. For devices such as networking RX, the order in which buffers are
used *does not matter*.
Thus if a device gets a fault when attempting to store a packet into a
buffer in memory, it can simply retry using the next buffer in the queue
instead.

This works because buffers can normally be used out of order by the device.

The faulted buffer will be reused for a later packet once the driver
notifies the device that the page has been faulted in.

Note that buffers are processed by the driver in the order in which they
have been used, *not* the order in which they were put in the queue.  So
this will *not* cause any packet reordering for the driver.

Packets will only get dropped if all buffers are swapped
out, which should be rare with a large RX queue.
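
To make this concrete, here is a rough device-side sketch in C of the
skip-on-fault idea. It is only an illustration: the descriptor layout and
all helpers (rx_desc, page_resident, dma_store, mark_needs_refault,
use_buffer) are hypothetical placeholders, not existing virtio or QEMU
interfaces.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct rx_desc {
    uint64_t gpa;       /* guest-physical address of the buffer */
    uint32_t len;       /* buffer length */
    bool     faulted;   /* set when a store into this buffer faulted */
};

/* Hypothetical helpers assumed to exist in the device model. */
bool page_resident(uint64_t gpa);                         /* is the backing page present? */
void dma_store(uint64_t gpa, const void *data, size_t n); /* copy packet into guest memory */
void mark_needs_refault(struct rx_desc *d);               /* set buffer aside until driver re-arms it */
void use_buffer(struct rx_desc *d, uint32_t written);     /* report buffer as used, in order of use */

/*
 * Deliver one received packet.  Buffers are consumed in whatever order
 * their pages happen to be resident; the used ring still reflects the
 * order of *use*, so the driver sees no packet reordering.  Returns
 * false only if every available buffer is swapped out, in which case
 * the packet is dropped.
 */
bool rx_deliver(struct rx_desc *avail, size_t n_avail,
                const void *pkt, size_t pkt_len)
{
    for (size_t i = 0; i < n_avail; i++) {
        struct rx_desc *d = &avail[i];

        if (d->faulted || !page_resident(d->gpa)) {
            /* Store would fault: set this buffer aside, retry with the next one. */
            d->faulted = true;
            mark_needs_refault(d);
            continue;
        }
        dma_store(d->gpa, pkt, pkt_len);
        use_buffer(d, (uint32_t)pkt_len);
        return true;
    }
    return false; /* all buffers swapped out: rare with a large RX queue */
}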


As I said at the forum, a side buffer in which up to X packets can be
stored temporarily is also possible. But with the above scheme it is no
longer strictly required.


This conflicts with the IN_ORDER feature flag, so I guess we will have
to re-think that flag. If we do feel we need to salvage IN_ORDER as is,
maybe the device can use the faulted buffer with length 0 and the driver
will re-post it later, but I am not sure about this, since involving the
VF driver seems inelegant.
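
As a driver-side sketch of that "length 0" fallback (again purely
illustrative: vq_get_used, vq_repost and deliver_to_stack are made-up
placeholders, not the actual virtio driver API):

#include <stdbool.h>
#include <stdint.h>

struct used_elem {
    void     *buf;   /* driver cookie for the buffer */
    uint32_t  len;   /* bytes written by the device; 0 means "skipped" */
};

/* Hypothetical driver helpers. */
bool vq_get_used(struct used_elem *e);          /* pop the next used buffer, in order */
void vq_repost(void *buf);                      /* make the buffer available to the device again */
void deliver_to_stack(void *buf, uint32_t len); /* hand a received packet to the network stack */

void rx_poll(void)
{
    struct used_elem e;

    while (vq_get_used(&e)) {
        if (e.len == 0) {
            /* Device skipped this buffer (page was faulted out): just re-post it. */
            vq_repost(e.buf);
            continue;
        }
        deliver_to_stack(e.buf, e.len);
    }
}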


-- 
MST




* Re: presentation at kvm forum and pagefaults
From: Michael S. Tsirkin @ 2019-11-01  8:51 UTC
  To: virtio-dev, qemu-devel, virtualization

On Fri, Nov 01, 2019 at 12:07:01AM -0400, Michael S. Tsirkin wrote:
> Following up on the presentation I gave at the KVM Forum
> on page faults.
> 
> Two points:
> 
> 
> 1. Page faults are important not just for migration.
> They also matter for performance features such as
> autonuma and huge pages, since these rely on moving
> pages around.
> Migration can maybe be solved by switching to a software implementation,
> but this is not a good solution for NUMA and THP, since
> at any given time some page is likely being moved.
> 

Also, page faults might allow IOMMU page table shadowing to scale better
to huge guests: the host IOMMU page tables could be populated
lazily on fault. I'm not sure what the performance of such an approach
would be, but this space might be worth exploring.
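
To illustrate what "populated lazily on fault" could mean, here is a very
rough sketch; guest_iommu_translate, host_iommu_map and
inject_fault_to_guest are made-up placeholders, not an existing QEMU or
VFIO interface:

#include <stdbool.h>
#include <stdint.h>

struct iommu_fault {
    uint64_t iova;   /* I/O virtual address the device faulted on */
};

/* Hypothetical helpers. */
bool guest_iommu_translate(uint64_t iova, uint64_t *gpa, unsigned *perm);
int  host_iommu_map(uint64_t iova, uint64_t gpa, unsigned perm);
void inject_fault_to_guest(uint64_t iova);

/*
 * Called when the device reports a translation fault.  Only mappings a
 * device actually touches ever get shadowed, which is what might let
 * this scale to huge guests; on success the device retries the access.
 */
void handle_iommu_fault(const struct iommu_fault *f)
{
    uint64_t gpa;
    unsigned perm;

    if (!guest_iommu_translate(f->iova, &gpa, &perm)) {
        /* Not mapped by the guest either: forward the fault to the guest. */
        inject_fault_to_guest(f->iova);
        return;
    }
    if (host_iommu_map(f->iova, gpa, perm) != 0) {
        /* Could not populate the shadow entry; surface a fault to the guest. */
        inject_fault_to_guest(f->iova);
    }
}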


> 
> 
> 
> 2. For devices such as networking RX, the order in which buffers are
> used *does not matter*.
> Thus if a device gets a fault when attempting to store a packet into a
> buffer in memory, it can simply retry using the next buffer in the queue
> instead.
> 
> This works because buffers can normally be used out of order by the device.
> 
> The faulted buffer will be reused for a later packet once the driver
> notifies the device that the page has been faulted in.
> 
> Note that buffers are processed by the driver in the order in which they
> have been used, *not* the order in which they were put in the queue.  So
> this will *not* cause any packet reordering for the driver.
> 
> Packets will only get dropped if all buffers are swapped
> out, which should be rare with a large RX queue.
> 
> 
> As I said at the forum, a side buffer in which up to X packets can be
> stored temporarily is also possible. But with the above scheme it is no
> longer strictly required.
> 
> 
> This conflicts with the IN_ORDER feature flag, so I guess we will have
> to re-think that flag. If we do feel we need to salvage IN_ORDER as is,
> maybe the device can use the faulted buffer with length 0 and the driver
> will re-post it later, but I am not sure about this, since involving the
> VF driver seems inelegant.
> 
> -- 
> MST



