* Call for GSoC and Outreachy project ideas for summer 2023
@ 2023-01-27 15:17 Stefan Hajnoczi
  2023-01-27 17:10 ` Warner Losh
                   ` (5 more replies)
  0 siblings, 6 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-01-27 15:17 UTC (permalink / raw)
  To: qemu-devel, kvm, Rust-VMM Mailing List
  Cc: Alex Bennée, Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Alberto Faria, Daniel Henrique Barboza, Cédric Le Goater,
	Bernhard Beschow, Sean Christopherson, Vitaly Kuznetsov,
	gmaglione

Dear QEMU, KVM, and rust-vmm communities,
QEMU will apply for Google Summer of Code 2023
(https://summerofcode.withgoogle.com/) and has been accepted into
Outreachy May 2023 (https://www.outreachy.org/). You can now
submit internship project ideas for QEMU, KVM, and rust-vmm!

Please reply to this email by February 6th with your project ideas.

If you have experience contributing to QEMU, KVM, or rust-vmm you can
be a mentor. Mentors support interns as they work on their project. It's a
great way to give back and you get to work with people who are just
starting out in open source.

Good project ideas are suitable for remote work by a competent
programmer who is not yet familiar with the codebase. In
addition, they are:
- Well-defined - the scope is clear
- Self-contained - there are few dependencies
- Uncontroversial - they are acceptable to the community
- Incremental - they produce deliverables along the way

Feel free to post ideas even if you are unable to mentor the project.
It doesn't hurt to share the idea!

I will review project ideas and keep you up-to-date on QEMU's
acceptance into GSoC.

Internship program details:
- Paid, remote work open source internships
- GSoC projects are 175 or 350 hours, Outreachy projects are 30
hrs/week for 12 weeks
- Mentored by volunteers from QEMU, KVM, and rust-vmm
- Mentors typically spend at least 5 hours per week during the coding period

For more background on QEMU internships, check out this video:
https://www.youtube.com/watch?v=xNVCX7YMUL8

Please let me know if you have any questions!

Stefan

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
@ 2023-01-27 17:10 ` Warner Losh
  2023-01-27 22:01   ` Stefan Hajnoczi
  2023-02-05  8:14 ` Eugenio Perez Martin
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Warner Losh @ 2023-01-27 17:10 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

[[ cc list trimmed to just qemu-devel ]]

On Fri, Jan 27, 2023 at 8:18 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:

> Dear QEMU, KVM, and rust-vmm communities,
> QEMU will apply for Google Summer of Code 2023
> (https://summerofcode.withgoogle.com/) and has been accepted into
> Outreachy May 2023 (https://www.outreachy.org/). You can now
> submit internship project ideas for QEMU, KVM, and rust-vmm!
>
> Please reply to this email by February 6th with your project ideas.
>
> If you have experience contributing to QEMU, KVM, or rust-vmm you can
> be a mentor. Mentors support interns as they work on their project. It's a
> great way to give back and you get to work with people who are just
> starting out in open source.
>
> Good project ideas are suitable for remote work by a competent
> programmer who is not yet familiar with the codebase. In
> addition, they are:
> - Well-defined - the scope is clear
> - Self-contained - there are few dependencies
> - Uncontroversial - they are acceptable to the community
> - Incremental - they produce deliverables along the way
>
> Feel free to post ideas even if you are unable to mentor the project.
> It doesn't hurt to share the idea!
>

I've been a GSoC mentor for the FreeBSD project on and off for maybe
10-15 years now. I thought I'd share this for feedback here.

My project idea falls between the two projects. I've been trying to get
bsd-user reviewed and upstreamed for some time now, and my time
available to do the upstreaming has been greatly diminished lately. It
got me thinking: upstreaming is often more than just getting patches
reviewed. While there is a rather mechanical aspect to it (and I could
likely automate that aspect more), the real value of going through the
review process is that it points out things that had been done wrong,
things that need to be redone or refactored, etc. It's often these
suggestions that lead to the biggest investment of time on my part: Is
this idea good? If I do it, does it break things? Is the feedback right
about what's wrong, but wrong about how to fix it? Plus the inevitable:
I thought this was a good idea, implemented it only to find it broke
other things, and now how do I explain that and provide feedback to the
reviewer about that breakage to see if it is worth pursuing further?

So my idea for a project is threefold. First, to create scripts to
automate the upstreaming process: to break big files into bite-sized
chunks for review on this list. git publish does a great job from
there. The current backlog to upstream is approximately "175 files
changed, 30270 insertions(+), 640 deletions(-)", which is 300-600
patches at the 50-100 line patch guidance I've been given. So even at
0.1 hr (6 minutes) per patch (which is about 3x faster than I can do it
by hand), that's ~60 hours just to create the patches. Writing the
automation should take much less time. Realistically, this is on the
order of 10-20 hours to get done.

Second, it's to take feedback from the reviews and use it to refactor
the bsd-user code base (which will eventually land upstream). I often
spend a few hours creating my patches each quarter, then about 10 or so
hours for the 30-ish patches that I do processing the review feedback:
refactoring other things (typically other architectures), checking
details of other architectures (usually by looking at the FreeBSD
kernel), looking for ways to refactor to share code with linux-user
(though so far only the safe signals work is upstream; elf could be
too), chatting online about the feedback to better understand it, and
seeing what I can mine from linux-user (since the code is derived from
that, but didn't pick up all the changes linux-user has since). This
would be on the order of 100 hours.

Third, the testing infrastructure that exists for linux-user is not
well leveraged to test bsd-user. I've done some tests from time to time
with it, but it's not in a state where it can be used as, say, part of
a CI pipeline. In addition, the FreeBSD project has some very large
jobs, a subset of which could be used to further ensure that critical
bits of infrastructure don't break (or are working, if not in a CI
pipeline). Things like building and using go, rust, and the like are
constantly breaking for reasons too long to enumerate here. This job
could be as little as 50 hours for a minimal but complete-enough-for-CI
job, or as much as 200 hours for a more complete job that could be used
to bisect breakage more quickly and give good assurance that at any
given time bsd-user is useful and working.

That's in addition to growing the number of people who can work on this
code and on the *-user code in general, since they are quite similar.

Some of these tasks are squarely in the QEMU realm, while others are in
the FreeBSD realm, but that's similar to linux-user, which requires
very heavy interfacing with the Linux realm. It's just that a lot of
that work is already complete, so the needs are substantially less
there on an ongoing basis. Since it does straddle the two projects, I'm
unsure where to propose this project be housed. But since this is a
call for ideas, I thought I'd float it to see what the feedback is. I'm
happy to write this up in a more formal sense if it would be seriously
considered, but want to get feedback as to what areas I might want to
emphasize in such a proposal.

Comments?

Warner

I will review project ideas and keep you up-to-date on QEMU's
> acceptance into GSoC.
>
> Internship program details:
> - Paid, remote work open source internships
> - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> hrs/week for 12 weeks
> - Mentored by volunteers from QEMU, KVM, and rust-vmm
> - Mentors typically spend at least 5 hours per week during the coding
> period
>
> For more background on QEMU internships, check out this video:
> https://www.youtube.com/watch?v=xNVCX7YMUL8
>
> Please let me know if you have any questions!
>
> Stefan
>
>


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 17:10 ` Warner Losh
@ 2023-01-27 22:01   ` Stefan Hajnoczi
  2023-02-08 23:01     ` Warner Losh
  0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-01-27 22:01 UTC (permalink / raw)
  To: Warner Losh; +Cc: qemu-devel

On Fri, 27 Jan 2023 at 12:10, Warner Losh <imp@bsdimp.com> wrote:
>
> [[ cc list trimmed to just qemu-devel ]]
>
> On Fri, Jan 27, 2023 at 8:18 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> Dear QEMU, KVM, and rust-vmm communities,
>> QEMU will apply for Google Summer of Code 2023
>> (https://summerofcode.withgoogle.com/) and has been accepted into
>> Outreachy May 2023 (https://www.outreachy.org/). You can now
>> submit internship project ideas for QEMU, KVM, and rust-vmm!
>>
>> Please reply to this email by February 6th with your project ideas.
>>
>> If you have experience contributing to QEMU, KVM, or rust-vmm you can
>> be a mentor. Mentors support interns as they work on their project. It's a
>> great way to give back and you get to work with people who are just
>> starting out in open source.
>>
>> Good project ideas are suitable for remote work by a competent
>> programmer who is not yet familiar with the codebase. In
>> addition, they are:
>> - Well-defined - the scope is clear
>> - Self-contained - there are few dependencies
>> - Uncontroversial - they are acceptable to the community
>> - Incremental - they produce deliverables along the way
>>
>> Feel free to post ideas even if you are unable to mentor the project.
>> It doesn't hurt to share the idea!
>
>
> I've been a GSoC mentor for the FreeBSD project on and off for maybe
> 10-15 years now. I thought I'd share this for feedback here.
>
> My project idea falls between the two projects. I've been trying
> to get bsd-user reviewed and upstreamed for some time now and my
> time available to do the upstreaming has been greatly diminished lately.
> It got me thinking: upstreaming is more than just getting patches reviewed
> often times. While there is a rather mechanical aspect to it (and I could likely
> automate that aspect more), the real value of going through the review process
> is that it points out things that had been done wrong, things that need to be
> redone or refactored, etc. It's often these suggestions that lead to the biggest
> investment of time on my part: Is this idea good? if I do it, does it break things?
> Is the feedback right about what's wrong, but wrong about how to fix it? etc.
> Plus the inevitable, I thought this was a good idea, implemented it only to find
> it broke other things, and how do I explain that and provide feedback to the
> reviewer about that breakage to see if it is worth pursuing further or not?
>
> So my idea for a project is threefold: First, to create scripts to automate the
> upstreaming process: to break big files into bite-sized chunks for review on
> this list. git publish does a great job from there. The current backlog to upstream
> is approximately " 175 files changed, 30270 insertions(+), 640 deletions(-)" which
> is 300-600 patches at the 50-100 line patch guidance I've been given. So even
> at .1hr (6 minutes) per patch (which is about 3x faster than I can do it by hand),
> that's ~60 hours just to create the patches. Writing automation should take
> much less time. Realistically, this is on the order of 10-20 hours to get done.
>
> Second, it's to take feedback from the reviews for refactoring
> the bsd-user code base (which will eventually land in upstream). I often spend
> a few hours creating my patches each quarter, then about 10 or so hours for the
> 30ish patches that I do processing the review feedback by refactoring other things
> (typically other architectures), checking details of other architectures (usually by
> looking at the FreeBSD kernel), or looking for ways to refactor to share code with
> linux-user  (though so far only the safe signals is upstream: elf could be too), or
> chatting online about the feedback to better understand it, to see what I can mine
> from linux-user (since the code is derived from that, but didn't pick up all the changes
> linux-user has), etc. This would be on the order of 100 hours.
>
> Third, the testing infrastructure that exists for linux-user is not well leveraged to test
> bsd-user. I've done some tests from time to time with it, but it's not in a state that it
> can be used as, say, part of a CI pipeline. In addition, the FreeBSD project has some
> very large jobs, a subset of which could be used to further ensure that critical bits of
> infrastructure don't break (or are working if not in a CI pipeline). Things like building
> and using go, rust and the like are constantly breaking for reasons too long to enumerate
> here. This job could be as little as 50 hours to do a minimal but complete enough for CI job,
> or as much as 200 hours to do a more complete job that could be used to bisect breakage
> more quickly and give good assurance that at any given time bsd-user is useful and working.
>
> That's in addition to growing the number of people that can work on this code and
> on the *-user code in general since they are quite similar.
>
> Some of these tasks are squarely in the qemu-realm, while others are in the FreeBSD realm,
> but that's similar to linux-user which requires very heavy interfacing with the linux realm. It's
> just that a lot of that work is already complete so the needs are substantially less there on an
> ongoing basis. Since it does straddle the two projects, I'm unsure where to propose this project
> be housed. But since this is a call for ideas, I thought I'd float it to see what the feedback is. I'm
> happy to write this up in a more formal sense if it would be seriously considered, but want to get
> feedback as to what areas I might want to emphasize in such a proposal.
>
> Comments?

Hi Warner,
Don't worry about it spanning FreeBSD and QEMU, you're welcome to list
the project idea through QEMU. You can have co-mentors that are not
part of the QEMU community in order to bring in additional FreeBSD
expertise.

My main thought is that getting all code upstream sounds like a
sprawling project that likely won't be finished within one internship.
Can you pick just a subset of what you described? It should be a
well-defined project that depends minimally on other people finishing
stuff or reaching agreement on something controversial. That way the
intern will be able to come up with specific tasks for their project
plan, and there is little risk that they can't complete them due to
outside factors.

One way to go about this might be for you to define a milestone that
involves completing, testing, and upstreaming just a subset of the
out-of-tree code. For example, it might implement a limited set of
core syscall families. The intern will then focus on delivering that
instead of worrying about the daunting task of getting everything
merged. Finishing this subset would advance bsd-user FreeBSD support
by a useful degree (e.g. ability to run certain applications).

Does that sound good?

Stefan



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
  2023-01-27 17:10 ` Warner Losh
@ 2023-02-05  8:14 ` Eugenio Perez Martin
  2023-02-05 13:57   ` Stefan Hajnoczi
  2023-02-06 19:50 ` Alberto Faria
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Eugenio Perez Martin @ 2023-02-05  8:14 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> Dear QEMU, KVM, and rust-vmm communities,
> QEMU will apply for Google Summer of Code 2023
> (https://summerofcode.withgoogle.com/) and has been accepted into
> Outreachy May 2023 (https://www.outreachy.org/). You can now
> submit internship project ideas for QEMU, KVM, and rust-vmm!
>
> Please reply to this email by February 6th with your project ideas.
>
> If you have experience contributing to QEMU, KVM, or rust-vmm you can
> be a mentor. Mentors support interns as they work on their project. It's a
> great way to give back and you get to work with people who are just
> starting out in open source.
>
> Good project ideas are suitable for remote work by a competent
> programmer who is not yet familiar with the codebase. In
> addition, they are:
> - Well-defined - the scope is clear
> - Self-contained - there are few dependencies
> - Uncontroversial - they are acceptable to the community
> - Incremental - they produce deliverables along the way
>
> Feel free to post ideas even if you are unable to mentor the project.
> It doesn't hurt to share the idea!
>
> I will review project ideas and keep you up-to-date on QEMU's
> acceptance into GSoC.
>
> Internship program details:
> - Paid, remote work open source internships
> - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> hrs/week for 12 weeks
> - Mentored by volunteers from QEMU, KVM, and rust-vmm
> - Mentors typically spend at least 5 hours per week during the coding period
>
> For more background on QEMU internships, check out this video:
> https://www.youtube.com/watch?v=xNVCX7YMUL8
>
> Please let me know if you have any questions!
>
> Stefan
>

Appending the different ideas here.

VIRTIO_F_IN_ORDER feature support for virtio devices
===
This was already a project last year, and it produced a few series
upstream that were never merged. The previous series are a useful
starting point, so the work does not start from scratch [1]:

Summary
---
Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)

The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
that devices and drivers can negotiate when the device uses
descriptors in the same order in which they were made available by the
driver.

This feature can simplify device and driver implementations and
increase performance. For example, when VIRTIO_F_IN_ORDER is
negotiated, it may be easier to create a batch of buffers and reduce
DMA transactions when the device uses a batch of buffers.

Currently the devices and drivers available in Linux and QEMU do not
support this feature. An implementation is available in DPDK for the
virtio-net driver.
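As a rough illustration of what the feature permits, here is a
simplified sketch (illustrative types and helper, not QEMU's or Linux's
actual code) of how a driver could retire a whole batch of buffers from
a single used-ring entry once VIRTIO_F_IN_ORDER is negotiated:

```c
#include <stdint.h>

/* Simplified split-virtqueue used-ring element (illustrative, not QEMU's). */
struct vring_used_elem {
    uint32_t id;   /* head index of the completed descriptor chain */
    uint32_t len;  /* bytes written by the device */
};

/*
 * With VIRTIO_F_IN_ORDER, the device uses buffers in the order the driver
 * made them available. The device may therefore report one used element
 * whose id refers to the last buffer of a batch, and the driver can retire
 * every outstanding buffer up to and including that id, instead of looking
 * up each id individually.
 */
static unsigned in_order_complete(uint16_t *oldest, uint16_t vq_size,
                                  const struct vring_used_elem *used)
{
    unsigned retired = 0;

    /* Walk forward from the oldest outstanding buffer through used->id. */
    for (;;) {
        uint16_t head = *oldest;
        *oldest = (uint16_t)((*oldest + 1) % vq_size);
        retired++;
        if (head == used->id) {
            break;
        }
    }
    return retired;
}
```

This is the kind of batching that reduces per-buffer bookkeeping and DMA
transactions mentioned above.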

Goals
---
Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
Linux (virtio-net or virtio-serial are good starting points).
Generalize your approach to the common virtio core code for split and
packed virtqueue layouts.
If time allows, support for the packed virtqueue layout can be added
to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.

Shadow Virtqueue missing virtio features
===

Summary
---
Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
that allows them to dynamically change a number of parameters, like the
MAC address or the number of active queues. Changes made through CVQ to
passthrough devices using vDPA are inherently hard to track if CVQ is
handled like the passthrough data queues, because qemu is not aware of
that communication, for performance reasons. In this situation, qemu is
not able to migrate these devices, as it cannot tell the actual state
of the device.

Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
device, effectively forwarding the descriptors of that communication,
tracking the device internal state, and being able to migrate it to a
new destination qemu.

To restore that state in the destination, SVQ is able to send these
messages as regular CVQ commands. The code to understand and parse
virtio-net CVQ commands is already in qemu as part of its emulated
device, but the code to send some of the new state is not, and
some features are missing. There is already code to restore basic
commands like mac or multiqueue, and it is easy to use it as a
template.

Goals
---
To implement missing virtio-net commands sending:
* VIRTIO_NET_CTRL_RX family, to control receive mode.
* VIRTIO_NET_CTRL_GUEST_OFFLOADS
* VIRTIO_NET_CTRL_VLAN family
* VIRTIO_NET_CTRL_MQ_HASH config
* VIRTIO_NET_CTRL_MQ_RSS config
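For illustration, a restore path would build CVQ buffers shaped roughly
like this (a minimal sketch; the constants mirror the VIRTIO spec's
control-virtqueue layout, but the helper name is hypothetical, not
QEMU's actual API):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Constants mirroring the VIRTIO spec (virtio-net control virtqueue). */
#define VIRTIO_NET_CTRL_RX          0   /* class: receive-mode filtering */
#define VIRTIO_NET_CTRL_RX_PROMISC  0   /* command: toggle promiscuous mode */

/* Control command header; command data follows, then a status byte. */
struct virtio_net_ctrl_hdr {
    uint8_t class;
    uint8_t cmd;
} __attribute__((packed));

/*
 * Hypothetical sketch of building the buffer a restore path would submit
 * through SVQ to re-enable promiscuous mode on the destination: header,
 * one byte of command data (on/off), and space for the device status.
 */
static size_t build_rx_promisc_cmd(uint8_t *buf, uint8_t on)
{
    struct virtio_net_ctrl_hdr hdr = {
        .class = VIRTIO_NET_CTRL_RX,
        .cmd = VIRTIO_NET_CTRL_RX_PROMISC,
    };

    memcpy(buf, &hdr, sizeof(hdr));
    buf[sizeof(hdr)] = on;        /* command-specific data */
    buf[sizeof(hdr) + 1] = 0xff;  /* device overwrites with the status */
    return sizeof(hdr) + 2;       /* total bytes to place in the virtqueue */
}
```

The other command families above differ only in the class/cmd values and
the layout of the command-specific data.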

Shadow Virtqueue performance optimization
===
Summary
---
To perform a live migration of a virtual machine with a device external
to qemu, qemu needs a way to know which memory the device modifies so
it is able to resend it. Otherwise the guest would resume with invalid
/ outdated memory on the destination.

This is especially hard with passthrough hardware devices, as
transports like PCI impose a few security and performance challenges.
As a method to overcome this for virtio devices, qemu can offer an
emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
instead of allowing the device to communicate directly with the guest.
SVQ will then forward the writes to the guest, being the effective
writer in the guest memory and knowing when a portion of it needs to
be resent.
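The "effective writer" idea can be sketched as follows (hypothetical
helper and bitmap layout, assuming 4 KiB pages; not QEMU's actual
dirty-tracking API):

```c
#include <stdint.h>
#include <limits.h>

#define PAGE_SHIFT 12  /* assume 4 KiB pages for illustration */

/*
 * When SVQ copies a completed buffer back into guest memory, it knows
 * exactly which guest-physical range was written, so it can mark the
 * touched pages in a migration dirty bitmap; live migration then knows
 * which pages to resend.
 */
static void svq_mark_dirty(unsigned long *bitmap, uint64_t gpa, uint64_t len)
{
    const unsigned bits = sizeof(unsigned long) * CHAR_BIT;
    uint64_t first = gpa >> PAGE_SHIFT;
    uint64_t last = (gpa + len - 1) >> PAGE_SHIFT;

    for (uint64_t pfn = first; pfn <= last; pfn++) {
        bitmap[pfn / bits] |= 1UL << (pfn % bits);
    }
}
```

A passthrough device writing guest memory directly bypasses any such
hook, which is exactly why SVQ interposes on the data path.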

As this is effectively breaking the passthrough and it adds extra
steps in the communication, this comes with a performance penalty in
some forms: Context switches, more memory reads and writes increasing
cache pressure, etc.

At this moment the SVQ code is not optimized. It cannot forward
buffers in parallel using multiqueue and multithread, and it does not
use posted interrupts to notify the device skipping the host kernel
context switch (doorbells).

The SVQ code requires minimal modifications for multithreading, and
there are already examples of multithreaded devices, like virtio-blk,
that can be used as templates. Regarding posted interrupts, DPDK is
able to use them, so that code can also be used as a template.

Goals
---
* Measure the latest SVQ performance compared to non-SVQ.
* Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
* Add posted thread capabilities to QEMU, following the model of DPDK.

Thanks!

[1] https://wiki.qemu.org/Google_Summer_of_Code_2022#VIRTIO_F_IN_ORDER_support_for_virtio_devices



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-05  8:14 ` Eugenio Perez Martin
@ 2023-02-05 13:57   ` Stefan Hajnoczi
  2023-02-06 11:52     ` Eugenio Perez Martin
  0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-05 13:57 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Sun, 5 Feb 2023 at 03:15, Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > Dear QEMU, KVM, and rust-vmm communities,
> > QEMU will apply for Google Summer of Code 2023
> > (https://summerofcode.withgoogle.com/) and has been accepted into
> > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > submit internship project ideas for QEMU, KVM, and rust-vmm!
> >
> > Please reply to this email by February 6th with your project ideas.
> >
> > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > be a mentor. Mentors support interns as they work on their project. It's a
> > great way to give back and you get to work with people who are just
> > starting out in open source.
> >
> > Good project ideas are suitable for remote work by a competent
> > programmer who is not yet familiar with the codebase. In
> > addition, they are:
> > - Well-defined - the scope is clear
> > - Self-contained - there are few dependencies
> > - Uncontroversial - they are acceptable to the community
> > - Incremental - they produce deliverables along the way
> >
> > Feel free to post ideas even if you are unable to mentor the project.
> > It doesn't hurt to share the idea!
> >
> > I will review project ideas and keep you up-to-date on QEMU's
> > acceptance into GSoC.
> >
> > Internship program details:
> > - Paid, remote work open source internships
> > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > hrs/week for 12 weeks
> > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > - Mentors typically spend at least 5 hours per week during the coding period
> >
> > For more background on QEMU internships, check out this video:
> > https://www.youtube.com/watch?v=xNVCX7YMUL8
> >
> > Please let me know if you have any questions!
> >
> > Stefan
> >
>
> Appending the different ideas here.

Hi Eugenio,
Thanks for sharing your project ideas. I have added some questions
below before we add them to the ideas list wiki page.

> VIRTIO_F_IN_ORDER feature support for virtio devices
> ===
> This was already a project the last year, and it produced a few series
> upstream but was never merged. The previous series are totally useful
> to start with, so it's not starting from scratch with them [1]:

Has Zhi Guo stopped working on the patches?

What is the state of the existing patches? What work remains to be done?

>
> Summary
> ---
> Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)
>
> The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
> that devices and drivers can negotiate when the device uses
> descriptors in the same order in which they were made available by the
> driver.
>
> This feature can simplify device and driver implementations and
> increase performance. For example, when VIRTIO_F_IN_ORDER is
> negotiated, it may be easier to create a batch of buffers and reduce
> DMA transactions when the device uses a batch of buffers.
>
> Currently the devices and drivers available in Linux and QEMU do not
> support this feature. An implementation is available in DPDK for the
> virtio-net driver.
>
> Goals
> ---
> Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
> Linux (virtio-net or virtio-serial are good starting points).
> Generalize your approach to the common virtio core code for split and
> packed virtqueue layouts.
> If time allows, support for the packed virtqueue layout can be added
> to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.
>
> Shadow Virtqueue missing virtio features
> ===
>
> Summary
> ---
> Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
> that allows them to dynamically change a number of parameters like MAC
> or number of active queues. Changes to passthrough devices using vDPA
> using CVQ are inherently hard to track if CVQ is handled as
> passthrough data queues, because qemu is not aware of that
> communication for performance reasons. In this situation, qemu is not
> able to migrate these devices, as it is not able to tell the actual
> state of the device.
>
> Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
> device, effectively forwarding the descriptors of that communication,
> tracking the device internal state, and being able to migrate it to a
> new destination qemu.
>
> To restore that state in the destination, SVQ is able to send these
> messages as regular CVQ commands. The code to understand and parse
> virtio-net CVQ commands is already in qemu as part of its emulated
> device, but the code to send some of the new state is not, and
> some features are missing. There is already code to restore basic
> commands like mac or multiqueue, and it is easy to use it as a
> template.
>
> Goals
> ---
> To implement missing virtio-net commands sending:
> * VIRTIO_NET_CTRL_RX family, to control receive mode.
> * VIRTIO_NET_CTRL_GUEST_OFFLOADS
> * VIRTIO_NET_CTRL_VLAN family
> * VIRTIO_NET_CTRL_MQ_HASH config
> * VIRTIO_NET_CTRL_MQ_RSS config

Is there enough work here for a 350 hour or 175 hour GSoC project?

The project description mentions "there is already code to restore
basic commands like mac and multiqueue", please include a link.

> Shadow Virtqueue performance optimization
> ===
> Summary
> ---
> To perform a virtual machine live migration with an external device to
> qemu, qemu needs a way to know which memory the device modifies so it
> is able to resend it. Otherwise the guest would resume with invalid /
> outdated memory in the destination.
>
> This is especially hard with passthrough hardware devices, as
> transports like PCI impose a few security and performance challenges.
> As a method to overcome this for virtio devices, qemu can offer an
> emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
> instead of allowing the device to communicate directly with the guest.
> SVQ will then forward the writes to the guest, being the effective
> writer in the guest memory and knowing when a portion of it needs to
> be resent.
>
> As this is effectively breaking the passthrough and it adds extra
> steps in the communication, this comes with a performance penalty in
> some forms: Context switches, more memory reads and writes increasing
> cache pressure, etc.
>
> At this moment the SVQ code is not optimized. It cannot forward
> buffers in parallel using multiqueue and multithread, and it does not
> use posted interrupts to notify the device skipping the host kernel
> context switch (doorbells).
>
> The SVQ code requires minimal modifications for the multithreading,
> and these are examples of multithreaded devices already like
> virtio-blk which can be used as a template-alike. Regarding the posted
> interrupts, DPDK is able to use them so that code can also be used as
> a template.
>
> Goals
> ---
> * Measure the latest SVQ performance compared to non-SVQ.

Which benchmark workload and which benchmarking tool do you recommend?
Someone unfamiliar with QEMU and SVQ needs more details in order to
know what to do.

> * Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).

What do you have in mind? Allowing individual virtqueues to be
assigned to IOThreads? Or processing all virtqueues in a single
IOThread (like virtio-blk and virtio-scsi do today)?

> * Add posted thread capabilities to QEMU, following the model of DPDK to it.

What is this about? I thought KVM uses posted interrupts when
available, so what needs to be done here? Please also include a link
to the relevant DPDK code.

>
> Thanks!
>
> [1] https://wiki.qemu.org/Google_Summer_of_Code_2022#VIRTIO_F_IN_ORDER_support_for_virtio_devices
>


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-05 13:57   ` Stefan Hajnoczi
@ 2023-02-06 11:52     ` Eugenio Perez Martin
  2023-02-06 14:21       ` Stefan Hajnoczi
  0 siblings, 1 reply; 24+ messages in thread
From: Eugenio Perez Martin @ 2023-02-06 11:52 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Sun, Feb 5, 2023 at 2:57 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Sun, 5 Feb 2023 at 03:15, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >
> > On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > Dear QEMU, KVM, and rust-vmm communities,
> > > QEMU will apply for Google Summer of Code 2023
> > > (https://summerofcode.withgoogle.com/) and has been accepted into
> > > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > > submit internship project ideas for QEMU, KVM, and rust-vmm!
> > >
> > > Please reply to this email by February 6th with your project ideas.
> > >
> > > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > > be a mentor. Mentors support interns as they work on their project. It's a
> > > great way to give back and you get to work with people who are just
> > > starting out in open source.
> > >
> > > Good project ideas are suitable for remote work by a competent
> > > programmer who is not yet familiar with the codebase. In
> > > addition, they are:
> > > - Well-defined - the scope is clear
> > > - Self-contained - there are few dependencies
> > > - Uncontroversial - they are acceptable to the community
> > > - Incremental - they produce deliverables along the way
> > >
> > > Feel free to post ideas even if you are unable to mentor the project.
> > > It doesn't hurt to share the idea!
> > >
> > > I will review project ideas and keep you up-to-date on QEMU's
> > > acceptance into GSoC.
> > >
> > > Internship program details:
> > > - Paid, remote work open source internships
> > > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > > hrs/week for 12 weeks
> > > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > > - Mentors typically spend at least 5 hours per week during the coding period
> > >
> > > For more background on QEMU internships, check out this video:
> > > https://www.youtube.com/watch?v=xNVCX7YMUL8
> > >
> > > Please let me know if you have any questions!
> > >
> > > Stefan
> > >
> >
> > Appending the different ideas here.
>
> Hi Eugenio,
> Thanks for sharing your project ideas. I have added some questions
> below before we add them to the ideas list wiki page.
>
> > VIRTIO_F_IN_ORDER feature support for virtio devices
> > ===
> > This was already a project last year, and it produced a few patch
> > series upstream that were never merged. The previous series are very
> > useful starting points, so it is not necessary to start from scratch [1]:
>
> Has Zhi Guo stopped working on the patches?
>

I can ask him for sure.

> What is the state of the existing patches? What work remains to be done?
>

There are some pending comments from upstream. However, if somebody
starts from scratch, they will need time to review parts of the VirtIO
standard to understand the in_order feature, for both split and packed
virtqueues.
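For readers unfamiliar with the feature, its core guarantee can be sketched in a few lines. This is a conceptual model only (names and structure are mine, not QEMU or kernel code): with VIRTIO_F_IN_ORDER the device must use descriptors in the same order the driver made them available, so the driver can reclaim buffers with a single running counter instead of per-descriptor bookkeeping.

```python
# Conceptual model of VIRTIO_F_IN_ORDER: the device completes buffers
# in submission order, so the driver frees them with one counter.

def reclaim_in_order(submitted, used_count):
    """Return the buffers the driver may free after the device reports
    'used_count' used descriptors. With IN_ORDER these are simply the
    first 'used_count' submitted buffers, in order."""
    return submitted[:used_count]

submitted = ["buf0", "buf1", "buf2", "buf3"]
# The device signals it has used 3 descriptors; with IN_ORDER a single
# batched used-index update is enough to identify them.
print(reclaim_in_order(submitted, 3))  # ['buf0', 'buf1', 'buf2']
```

Without the feature, the device may complete descriptors in any order and the driver has to track each completion by descriptor id, which is the bookkeeping this feature lets implementations drop.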


> >
> > Summary
> > ---
> > Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)
> >
> > The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
> > that devices and drivers can negotiate when the device uses
> > descriptors in the same order in which they were made available by the
> > driver.
> >
> > This feature can simplify device and driver implementations and
> > increase performance. For example, when VIRTIO_F_IN_ORDER is
> > negotiated, it may be easier to create a batch of buffers and reduce
> > DMA transactions when the device uses a batch of buffers.
> >
> > Currently the devices and drivers available in Linux and QEMU do not
> > support this feature. An implementation is available in DPDK for the
> > virtio-net driver.
> >
> > Goals
> > ---
> > Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
> > Linux (virtio-net or virtio-serial are good starting points).
> > Generalize your approach to the common virtio core code for split and
> > packed virtqueue layouts.
> > If time allows, support for the packed virtqueue layout can be added
> > to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.
> >
> > Shadow Virtqueue missing virtio features
> > ===
> >
> > Summary
> > ---
> > Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
> > that allows them to dynamically change a number of parameters like MAC
> > or number of active queues. Changes made through CVQ to passthrough
> > devices using vDPA are inherently hard to track if CVQ is handled
> > like the passthrough data queues, because for performance reasons
> > qemu is not aware of that communication. In this situation, qemu is
> > not able to migrate these devices, as it cannot tell the actual
> > state of the device.
> >
> > Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
> > device, effectively forwarding the descriptors of that communication,
> > tracking the device internal state, and being able to migrate it to a
> > new destination qemu.
> >
> > To restore that state in the destination, SVQ is able to send these
> > messages as regular CVQ commands. The code to understand and parse
> > virtio-net CVQ commands is already in qemu as part of its emulated
> > device, but the code to send some of the new state is not, and
> > some features are missing. There is already code to restore basic
> > commands like mac or multiqueue, and it is easy to use it as a
> > template.
> >
> > Goals
> > ---
> > To implement missing virtio-net commands sending:
> > * VIRTIO_NET_CTRL_RX family, to control receive mode.
> > * VIRTIO_NET_CTRL_GUEST_OFFLOADS
> > * VIRTIO_NET_CTRL_VLAN family
> > * VIRTIO_NET_CTRL_MQ_HASH config
> > * VIRTIO_NET_CTRL_MQ_RSS config
>
> Is there enough work here for a 350 hour or 175 hour GSoC project?
>

I think 175 hours fits better. If needed, more features can be added
(packed vq, ring reset, etc.), but as a first contribution 175 hours
should work.

> The project description mentions "there is already code to restore
> basic commands like mac and multiqueue", please include a link.
>

MAC address support was merged together with ASID support, so the whole
series is more complicated than it should be. Here is the most relevant patch:
* https://lists.gnu.org/archive/html/qemu-devel/2022-09/msg00342.html

MQ is way cleaner in that regard, and future series should look more
similar to this one:
* https://www.mail-archive.com/qemu-devel@nongnu.org/msg906273.html
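For orientation, the wire format the missing code would have to emit is small. The following sketch packs one CVQ command following the layout in the VIRTIO spec (a u8 class, a u8 command, command-specific data; the device answers with a single ack byte). The helper name is mine, not an existing QEMU function:

```python
import struct

# struct virtio_net_ctrl_hdr { u8 class; u8 cmd; } followed by
# command-specific data; the device writes back one ack byte.
VIRTIO_NET_CTRL_RX = 0           # class: receive-mode control
VIRTIO_NET_CTRL_RX_PROMISC = 0   # cmd: toggle promiscuous mode

def build_ctrl_rx_promisc(on: bool) -> bytes:
    """Build the driver->device buffer for a CVQ 'set promiscuous' command."""
    return struct.pack("BBB", VIRTIO_NET_CTRL_RX, VIRTIO_NET_CTRL_RX_PROMISC,
                       1 if on else 0)

print(build_ctrl_rx_promisc(True).hex())  # '000001'
```

Each goal above is essentially building one such buffer for its command family and pushing it through the shadow CVQ during state restore.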

> > Shadow Virtqueue performance optimization
> > ===
> > Summary
> > ---
> > To perform a live migration of a virtual machine with a device
> > external to qemu, qemu needs a way to know which memory the device
> > modifies so it can resend it. Otherwise the guest would resume with
> > invalid / outdated memory on the destination.
> >
> > This is especially hard with passthrough hardware devices, as
> > transports like PCI impose a few security and performance challenges.
> > As a method to overcome this for virtio devices, qemu can offer an
> > emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
> > instead of allowing the device to communicate directly with the guest.
> > SVQ then forwards the writes to the guest, acting as the effective
> > writer of guest memory, and therefore knows when a portion of it
> > needs to be resent.
> >
> > As this effectively breaks the passthrough and adds extra steps to
> > the communication, it comes with a performance penalty in several
> > forms: context switches, more memory reads and writes that increase
> > cache pressure, etc.
> >
> > At the moment the SVQ code is not optimized. It cannot forward
> > buffers in parallel using multiple queues and threads, and it does
> > not use posted interrupts to notify the device, which would skip the
> > host kernel context switch (doorbells).
> >
> > The SVQ code requires minimal modifications for multithreading, and
> > there are already examples of multithreaded devices, like virtio-blk,
> > that can be used as templates. Regarding posted interrupts, DPDK is
> > able to use them, so that code can also serve as a template.
> >
> > Goals
> > ---
> > * Measure the latest SVQ performance compared to non-SVQ.
>
> Which benchmark workload and which benchmarking tool do you recommend?
> Someone unfamiliar with QEMU and SVQ needs more details in order to
> know what to do.
>

In my opinion netperf (TCP_STREAM & TCP_RR) or an iperf equivalent, plus
testpmd in AF_PACKET mode, should cover these scenarios well. But maybe
upstream will request additional testing. Feedback on this would
actually be appreciated.

My intention is not for the intern to develop new tests or anything
like that; they are just a means to justify the changes in SVQ. This
part would be very guided, or it could be dropped from the project. So
if these tools are not descriptive enough, maybe it is better to take
this out of the goals and mention it in the description instead.

> > * Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
>
> What do you have in mind? Allowing individual virtqueues to be
> assigned to IOThreads? Or processing all virtqueues in a single
> IOThread (like virtio-blk and virtio-scsi do today)?
>

My idea was to use IOThreads. I thought virtio-blk and virtio-scsi
already worked that way, actually. Is there a reason or advantage to
using just a single IOThread?

> > * Add posted interrupt capabilities to QEMU, following the model of DPDK.
>
> What is this about? I thought KVM uses posted interrupts when
> available, so what needs to be done here? Please also include a link
> to the relevant DPDK code.
>

The guest in KVM may use posted interrupts, but the SVQ code runs in
userland qemu :). As far as I know there were no previous uses of HW
posted interrupts, so SVQ can only use vhost-vdpa kick eventfds to
notify queues. This has a performance penalty in the form of host
kernel context switches.

If I'm not mistaken, this patch adds it to DPDK, but I may be missing
additional context or newer versions:
* https://lore.kernel.org/all/1579539790-3882-31-git-send-email-matan@mellanox.com/

Please let me know if you need further information. Thanks!


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 11:52     ` Eugenio Perez Martin
@ 2023-02-06 14:21       ` Stefan Hajnoczi
  2023-02-06 16:46         ` Eugenio Perez Martin
  0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-06 14:21 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Mon, 6 Feb 2023 at 06:53, Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Sun, Feb 5, 2023 at 2:57 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Sun, 5 Feb 2023 at 03:15, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > >
> > > On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > >
> > > > Dear QEMU, KVM, and rust-vmm communities,
> > > > QEMU will apply for Google Summer of Code 2023
> > > > (https://summerofcode.withgoogle.com/) and has been accepted into
> > > > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > > > submit internship project ideas for QEMU, KVM, and rust-vmm!
> > > >
> > > > Please reply to this email by February 6th with your project ideas.
> > > >
> > > > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > > > be a mentor. Mentors support interns as they work on their project. It's a
> > > > great way to give back and you get to work with people who are just
> > > > starting out in open source.
> > > >
> > > > Good project ideas are suitable for remote work by a competent
> > > > programmer who is not yet familiar with the codebase. In
> > > > addition, they are:
> > > > - Well-defined - the scope is clear
> > > > - Self-contained - there are few dependencies
> > > > - Uncontroversial - they are acceptable to the community
> > > > - Incremental - they produce deliverables along the way
> > > >
> > > > Feel free to post ideas even if you are unable to mentor the project.
> > > > It doesn't hurt to share the idea!
> > > >
> > > > I will review project ideas and keep you up-to-date on QEMU's
> > > > acceptance into GSoC.
> > > >
> > > > Internship program details:
> > > > - Paid, remote work open source internships
> > > > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > > > hrs/week for 12 weeks
> > > > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > > > - Mentors typically spend at least 5 hours per week during the coding period
> > > >
> > > > For more background on QEMU internships, check out this video:
> > > > https://www.youtube.com/watch?v=xNVCX7YMUL8
> > > >
> > > > Please let me know if you have any questions!
> > > >
> > > > Stefan
> > > >
> > >
> > > Appending the different ideas here.
> >
> > Hi Eugenio,
> > Thanks for sharing your project ideas. I have added some questions
> > below before we add them to the ideas list wiki page.

Thanks for the discussion. Do you want to focus on 1 or 2 project
ideas? 3 might be a bit much to mentor.

Please send an updated version of the project descriptions and I'll
post it on the wiki.

> >
> > > VIRTIO_F_IN_ORDER feature support for virtio devices
> > > ===
> > > This was already a project last year, and it produced a few patch
> > > series upstream that were never merged. The previous series are very
> > > useful starting points, so it is not necessary to start from scratch [1]:
> >
> > Has Zhi Guo stopped working on the patches?
> >
>
> I can ask him for sure.
>
> > What is the state of the existing patches? What work remains to be done?
> >
>
> There are some pending comments from upstream. However, if somebody
> starts from scratch, they will need time to review parts of the VirtIO
> standard to understand the in_order feature, for both split and packed
> virtqueues.

The intern will need to take ownership and deal with code review
feedback for code they didn't write. That can be difficult for someone
who is new unless the requested changes are easy to address.

It's okay to start from scratch. You're in a better position than an
applicant to decide whether that's the best approach.

>
>
> > >
> > > Summary
> > > ---
> > > Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)
> > >
> > > The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
> > > that devices and drivers can negotiate when the device uses
> > > descriptors in the same order in which they were made available by the
> > > driver.
> > >
> > > This feature can simplify device and driver implementations and
> > > increase performance. For example, when VIRTIO_F_IN_ORDER is
> > > negotiated, it may be easier to create a batch of buffers and reduce
> > > DMA transactions when the device uses a batch of buffers.
> > >
> > > Currently the devices and drivers available in Linux and QEMU do not
> > > support this feature. An implementation is available in DPDK for the
> > > virtio-net driver.
> > >
> > > Goals
> > > ---
> > > Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
> > > Linux (virtio-net or virtio-serial are good starting points).
> > > Generalize your approach to the common virtio core code for split and
> > > packed virtqueue layouts.
> > > If time allows, support for the packed virtqueue layout can be added
> > > to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.
> > >
> > > Shadow Virtqueue missing virtio features
> > > ===
> > >
> > > Summary
> > > ---
> > > Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
> > > that allows them to dynamically change a number of parameters like MAC
> > > or number of active queues. Changes made through CVQ to passthrough
> > > devices using vDPA are inherently hard to track if CVQ is handled
> > > like the passthrough data queues, because for performance reasons
> > > qemu is not aware of that communication. In this situation, qemu is
> > > not able to migrate these devices, as it cannot tell the actual
> > > state of the device.
> > >
> > > Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
> > > device, effectively forwarding the descriptors of that communication,
> > > tracking the device internal state, and being able to migrate it to a
> > > new destination qemu.
> > >
> > > To restore that state in the destination, SVQ is able to send these
> > > messages as regular CVQ commands. The code to understand and parse
> > > virtio-net CVQ commands is already in qemu as part of its emulated
> > > device, but the code to send some of the new state is not, and
> > > some features are missing. There is already code to restore basic
> > > commands like mac or multiqueue, and it is easy to use it as a
> > > template.
> > >
> > > Goals
> > > ---
> > > To implement missing virtio-net commands sending:
> > > * VIRTIO_NET_CTRL_RX family, to control receive mode.
> > > * VIRTIO_NET_CTRL_GUEST_OFFLOADS
> > > * VIRTIO_NET_CTRL_VLAN family
> > > * VIRTIO_NET_CTRL_MQ_HASH config
> > > * VIRTIO_NET_CTRL_MQ_RSS config
> >
> > Is there enough work here for a 350 hour or 175 hour GSoC project?
> >
>
> I think 175 hours fits better. If needed, more features can be added
> (packed vq, ring reset, etc.), but as a first contribution 175 hours
> should work.
>
> > The project description mentions "there is already code to restore
> > basic commands like mac and multiqueue", please include a link.
> >
>
> MAC address support was merged together with ASID support, so the whole
> series is more complicated than it should be. Here is the most relevant patch:
> * https://lists.gnu.org/archive/html/qemu-devel/2022-09/msg00342.html
>
> MQ is way cleaner in that regard, and future series should look more
> similar to this one:
> * https://www.mail-archive.com/qemu-devel@nongnu.org/msg906273.html
>
> > > Shadow Virtqueue performance optimization
> > > ===
> > > Summary
> > > ---
> > > To perform a live migration of a virtual machine with a device
> > > external to qemu, qemu needs a way to know which memory the device
> > > modifies so it can resend it. Otherwise the guest would resume with
> > > invalid / outdated memory on the destination.
> > >
> > > This is especially hard with passthrough hardware devices, as
> > > transports like PCI impose a few security and performance challenges.
> > > As a method to overcome this for virtio devices, qemu can offer an
> > > emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
> > > instead of allowing the device to communicate directly with the guest.
> > > SVQ then forwards the writes to the guest, acting as the effective
> > > writer of guest memory, and therefore knows when a portion of it
> > > needs to be resent.
> > >
> > > As this effectively breaks the passthrough and adds extra steps to
> > > the communication, it comes with a performance penalty in several
> > > forms: context switches, more memory reads and writes that increase
> > > cache pressure, etc.
> > >
> > > At the moment the SVQ code is not optimized. It cannot forward
> > > buffers in parallel using multiple queues and threads, and it does
> > > not use posted interrupts to notify the device, which would skip the
> > > host kernel context switch (doorbells).
> > >
> > > The SVQ code requires minimal modifications for multithreading, and
> > > there are already examples of multithreaded devices, like virtio-blk,
> > > that can be used as templates. Regarding posted interrupts, DPDK is
> > > able to use them, so that code can also serve as a template.
> > >
> > > Goals
> > > ---
> > > * Measure the latest SVQ performance compared to non-SVQ.
> >
> > Which benchmark workload and which benchmarking tool do you recommend?
> > Someone unfamiliar with QEMU and SVQ needs more details in order to
> > know what to do.
> >
>
> In my opinion netperf (TCP_STREAM & TCP_RR) or an iperf equivalent, plus
> testpmd in AF_PACKET mode, should cover these scenarios well. But maybe
> upstream will request additional testing. Feedback on this would
> actually be appreciated.
>
> My intention is not for the intern to develop new tests or anything
> like that; they are just a means to justify the changes in SVQ. This
> part would be very guided, or it could be dropped from the project. So
> if these tools are not descriptive enough, maybe it is better to take
> this out of the goals and mention it in the description instead.

Great, "netperf (TCP_STREAM & TCP_RR) or iperf equivalent + testpmd in
AF_PACKET mode" is enough information.

>
> > > * Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
> >
> > What do you have in mind? Allowing individual virtqueues to be
> > assigned to IOThreads? Or processing all virtqueues in a single
> > IOThread (like virtio-blk and virtio-scsi do today)?
> >
>
> My idea was to use IOThreads. I thought virtio-blk and virtio-scsi
> already worked that way, actually. Is there a reason or advantage to
> using just a single IOThread?

The reason for only supporting a single IOThread at the moment is
thread-safety. There is multi-queue work in progress that will remove
this limitation in the future.

I sent a patch series proposing a command-line syntax for multi-queue here:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg933001.html

The idea is that the same syntax can be used by other devices that
support mapping vqs to multiple IOThreads.

>
> > > * Add posted interrupt capabilities to QEMU, following the model of DPDK.
> >
> > What is this about? I thought KVM uses posted interrupts when
> > available, so what needs to be done here? Please also include a link
> > to the relevant DPDK code.
> >
>
> The guest in KVM may use posted interrupts, but the SVQ code runs in
> userland qemu :). As far as I know there were no previous uses of HW
> posted interrupts, so SVQ can only use vhost-vdpa kick eventfds to
> notify queues. This has a performance penalty in the form of host
> kernel context switches.
>
> If I'm not mistaken, this patch adds it to DPDK, but I may be missing
> additional context or newer versions:
> * https://lore.kernel.org/all/1579539790-3882-31-git-send-email-matan@mellanox.com/
>
> Please let me know if you need further information. Thanks!

This patch does not appear related to posted interrupts because it's
using the kickfd (available buffer notification) instead of the callfd
(used buffer notification). It's the glue that forwards a virtqueue
kick to hardware.

I don't think that userspace available buffer notification
interception can be bypassed in the SVQ model. SVQ needs to take a
copy of available buffers so it knows the scatter-gather lists before
forwarding the kick to the vDPA device. If the notification is
bypassed then SVQ cannot reliably capture the scatter-gather list.

I also don't think it's possible to bypass userspace in the used
buffer notification path. The vDPA used buffer notification must be
intercepted so SVQ can mark memory pages in the scatter-gather list
dirty before it fills in a guest used buffer and sends a guest used
buffer notification.
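The interception requirement can be modeled in a few lines. This is a conceptual sketch of the bookkeeping (function and variable names are mine, not QEMU code): before forwarding a used buffer to the guest, SVQ must mark every guest page touched by the buffer's scatter-gather list as dirty so migration can resend it.

```python
PAGE_SIZE = 4096

def pages_of(addr: int, length: int):
    """Guest-physical page numbers touched by one scatter-gather entry."""
    first = addr // PAGE_SIZE
    last = (addr + length - 1) // PAGE_SIZE
    return range(first, last + 1)

def on_used_buffer(sg_list, dirty_bitmap: set):
    # SVQ must see the used-buffer notification so it can mark the
    # written pages dirty for live migration *before* handing the used
    # buffer (and the notification) to the guest.
    for addr, length in sg_list:
        dirty_bitmap.update(pages_of(addr, length))

dirty = set()
on_used_buffer([(0x1ff8, 16)], dirty)   # a write straddling two pages
print(sorted(dirty))  # [1, 2]
```

If the used-buffer path bypassed userspace, this marking step could not happen, which is why the notification cannot simply be short-circuited in the SVQ model.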

The guest used buffer notification should already be a VT-d Posted
Interrupt on hardware that supports the feature. KVM takes care of
that.

I probably don't understand what the optimization idea is. You want
SVQ to avoid a system call when sending vDPA available buffer
notifications? That's not related to posted interrupts though, so I'm
confused...

Stefan

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 14:21       ` Stefan Hajnoczi
@ 2023-02-06 16:46         ` Eugenio Perez Martin
  2023-02-06 17:21           ` Stefan Hajnoczi
  0 siblings, 1 reply; 24+ messages in thread
From: Eugenio Perez Martin @ 2023-02-06 16:46 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Mon, Feb 6, 2023 at 3:21 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Mon, 6 Feb 2023 at 06:53, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >
> > On Sun, Feb 5, 2023 at 2:57 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > On Sun, 5 Feb 2023 at 03:15, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > > >
> > > > On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > >
> > > > > Dear QEMU, KVM, and rust-vmm communities,
> > > > > QEMU will apply for Google Summer of Code 2023
> > > > > (https://summerofcode.withgoogle.com/) and has been accepted into
> > > > > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > > > > submit internship project ideas for QEMU, KVM, and rust-vmm!
> > > > >
> > > > > Please reply to this email by February 6th with your project ideas.
> > > > >
> > > > > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > > > > be a mentor. Mentors support interns as they work on their project. It's a
> > > > > great way to give back and you get to work with people who are just
> > > > > starting out in open source.
> > > > >
> > > > > Good project ideas are suitable for remote work by a competent
> > > > > programmer who is not yet familiar with the codebase. In
> > > > > addition, they are:
> > > > > - Well-defined - the scope is clear
> > > > > - Self-contained - there are few dependencies
> > > > > - Uncontroversial - they are acceptable to the community
> > > > > - Incremental - they produce deliverables along the way
> > > > >
> > > > > Feel free to post ideas even if you are unable to mentor the project.
> > > > > It doesn't hurt to share the idea!
> > > > >
> > > > > I will review project ideas and keep you up-to-date on QEMU's
> > > > > acceptance into GSoC.
> > > > >
> > > > > Internship program details:
> > > > > - Paid, remote work open source internships
> > > > > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > > > > hrs/week for 12 weeks
> > > > > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > > > > - Mentors typically spend at least 5 hours per week during the coding period
> > > > >
> > > > > For more background on QEMU internships, check out this video:
> > > > > https://www.youtube.com/watch?v=xNVCX7YMUL8
> > > > >
> > > > > Please let me know if you have any questions!
> > > > >
> > > > > Stefan
> > > > >
> > > >
> > > > Appending the different ideas here.
> > >
> > > Hi Eugenio,
> > > Thanks for sharing your project ideas. I have added some questions
> > > below before we add them to the ideas list wiki page.
>
> Thanks for the discussion. Do you want to focus on 1 or 2 project
> ideas? 3 might be a bit much to mentor.
>

Right, my idea was to narrow the list down afterwards, in case some of
them were rejected. But sure, we can filter some out if needed.

> Please send an updated version of the project descriptions and I'll
> post it on the wiki.
>
> > >
> > > > VIRTIO_F_IN_ORDER feature support for virtio devices
> > > > ===
> > > > This was already a project last year, and it produced a few patch
> > > > series upstream that were never merged. The previous series are very
> > > > useful starting points, so it is not necessary to start from scratch [1]:
> > >
> > > Has Zhi Guo stopped working on the patches?
> > >
> >
> > I can ask him for sure.
> >
> > > What is the state of the existing patches? What work remains to be done?
> > >
> >
> > There are some pending comments from upstream. However, if somebody
> > starts from scratch, they will need time to review parts of the VirtIO
> > standard to understand the in_order feature, for both split and packed
> > virtqueues.
>
> The intern will need to take ownership and deal with code review
> feedback for code they didn't write. That can be difficult for someone
> who is new unless the requested changes are easy to address.
>

Indeed that is a very good point.

> It's okay to start from scratch. You're in a better position than an
> applicant to decide whether that's the best approach.
>
> >
> >
> > > >
> > > > Summary
> > > > ---
> > > > Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)
> > > >
> > > > The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
> > > > that devices and drivers can negotiate when the device uses
> > > > descriptors in the same order in which they were made available by the
> > > > driver.
> > > >
> > > > This feature can simplify device and driver implementations and
> > > > increase performance. For example, when VIRTIO_F_IN_ORDER is
> > > > negotiated, it may be easier to create a batch of buffers and reduce
> > > > DMA transactions when the device uses a batch of buffers.
> > > >
> > > > Currently the devices and drivers available in Linux and QEMU do not
> > > > support this feature. An implementation is available in DPDK for the
> > > > virtio-net driver.
> > > >
> > > > Goals
> > > > ---
> > > > Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
> > > > Linux (virtio-net or virtio-serial are good starting points).
> > > > Generalize your approach to the common virtio core code for split and
> > > > packed virtqueue layouts.
> > > > If time allows, support for the packed virtqueue layout can be added
> > > > to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.
> > > >
> > > > Shadow Virtqueue missing virtio features
> > > > ===
> > > >
> > > > Summary
> > > > ---
> > > > Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
> > > > that allows them to dynamically change a number of parameters like MAC
> > > > or number of active queues. Changes made through CVQ to passthrough
> > > > devices using vDPA are inherently hard to track if CVQ is handled
> > > > like the passthrough data queues, because for performance reasons
> > > > qemu is not aware of that communication. In this situation, qemu is
> > > > not able to migrate these devices, as it cannot tell the actual
> > > > state of the device.
> > > >
> > > > Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
> > > > device, effectively forwarding the descriptors of that communication,
> > > > tracking the device internal state, and being able to migrate it to a
> > > > new destination qemu.
> > > >
> > > > To restore that state in the destination, SVQ is able to send these
> > > > messages as regular CVQ commands. The code to understand and parse
> > > > virtio-net CVQ commands is already in qemu as part of its emulated
> > > > device, but the code to send some of the new state is not, and
> > > > some features are missing. There is already code to restore basic
> > > > commands like mac or multiqueue, and it is easy to use it as a
> > > > template.
> > > >
> > > > Goals
> > > > ---
> > > > To implement missing virtio-net commands sending:
> > > > * VIRTIO_NET_CTRL_RX family, to control receive mode.
> > > > * VIRTIO_NET_CTRL_GUEST_OFFLOADS
> > > > * VIRTIO_NET_CTRL_VLAN family
> > > > * VIRTIO_NET_CTRL_MQ_HASH config
> > > > * VIRTIO_NET_CTRL_MQ_RSS config
> > >
> > > Is there enough work here for a 350 hour or 175 hour GSoC project?
> > >
> >
> > I think 175 hours should fit better. If needed, more features can be
> > added (packed vq, ring reset, etc.), but to start contributing a 175
> > hour project should work.
> >
> > > The project description mentions "there is already code to restore
> > > basic commands like mac and multiqueue", please include a link.
> > >
> >
> > MAC address support was merged together with ASID support, so the whole
> > series is more complicated than it should be. Here is the most relevant patch:
> > * https://lists.gnu.org/archive/html/qemu-devel/2022-09/msg00342.html
> >
> > MQ is way cleaner in that regard, and future series should look more
> > similar to this one:
> > * https://www.mail-archive.com/qemu-devel@nongnu.org/msg906273.html
> >
> > > > Shadow Virtqueue performance optimization
> > > > ===
> > > > Summary
> > > > ---
> > > > To perform a virtual machine live migration with a device external to
> > > > qemu, qemu needs a way to know which memory the device modifies so it
> > > > is able to resend it. Otherwise the guest would resume with invalid /
> > > > outdated memory at the destination.
> > > >
> > > > This is especially hard with passthrough hardware devices, as
> > > > transports like PCI impose a few security and performance challenges.
> > > > As a method to overcome this for virtio devices, qemu can offer an
> > > > emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
> > > > instead of allowing the device to communicate directly with the guest.
> > > > SVQ will then forward the writes to the guest, being the effective
> > > > writer in the guest memory and knowing when a portion of it needs to
> > > > be resent.
> > > >
> > > > As this is effectively breaking the passthrough and it adds extra
> > > > steps in the communication, this comes with a performance penalty in
> > > > some forms: Context switches, more memory reads and writes increasing
> > > > cache pressure, etc.
> > > >
> > > > At this moment the SVQ code is not optimized. It cannot forward
> > > > buffers in parallel using multiqueue and multithreading, and it does
> > > > not use posted interrupts to notify the device, skipping the host
> > > > kernel context switch (doorbells).
> > > >
> > > > The SVQ code requires minimal modifications for multithreading, and
> > > > there are already examples of multithreaded devices, like virtio-blk,
> > > > which can be used as templates. Regarding the posted interrupts, DPDK
> > > > is able to use them, so that code can also be used as a template.
> > > >
> > > > Goals
> > > > ---
> > > > * Measure the latest SVQ performance compared to non-SVQ.
> > >
> > > Which benchmark workload and which benchmarking tool do you recommend?
> > > Someone unfamiliar with QEMU and SVQ needs more details in order to
> > > know what to do.
> > >
> >
> > In my opinion netperf (TCP_STREAM & TCP_RR) or an iperf equivalent +
> > testpmd in AF_PACKET mode should test these scenarios best. But maybe
> > upstream will request additional testing. Feedback on this would
> > actually be appreciated.
> >
> > My intention is not for the intern to develop new tests or anything
> > like that; they are just a means to justify the changes in SVQ. This
> > part would be very guided, or it can be offloaded from the project. So
> > if these tools are not descriptive enough, maybe it's better to take
> > this out of the goals and add it to the description instead.
>
> Great, "netperf (TCP_STREAM & TCP_RR) or iperf equivalent + testpmd in
> AF_PACKET mode" is enough information.
>
> >
> > > > * Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
> > >
> > > What do you have in mind? Allowing individual virtqueues to be
> > > assigned to IOThreads? Or processing all virtqueues in a single
> > > IOThread (like virtio-blk and virtio-scsi do today)?
> > >
> >
> > My idea was to use iothreads. I thought virtio-blk and virtio-scsi
> > were already done that way. Is there a reason / advantage to using
> > just a single iothread?
>
> The reason for only supporting a single IOThread at the moment is
> thread-safety. There is multi-queue work in progress that will remove
> this limitation in the future.
>
> I sent a patch series proposing a command-line syntax for multi-queue here:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg933001.html
>
> The idea is that the same syntax can be used by other devices that
> support mapping vqs to multiple IOThreads.
>

Understood. I'll take a look, thanks!

> >
> > > > > * Add posted interrupt capabilities to QEMU, following the DPDK model.
> > >
> > > What is this about? I thought KVM uses posted interrupts when
> > > available, so what needs to be done here? Please also include a link
> > > to the relevant DPDK code.
> > >
> >
> > The guest in KVM may use posted interrupts but SVQ code runs in
> > userland qemu :). There were no previous uses of HW posted interrupts
> > as far as I know so SVQ is only able to use vhost-vdpa kick eventfds
> > to notify queues. This has a performance penalty in the form of host
> > kernel context switches.
> >
> > If I'm not wrong this patch adds it to DPDK, but I may be missing
> > additional context or versions:
> > * https://lore.kernel.org/all/1579539790-3882-31-git-send-email-matan@mellanox.com/
> >
> > Please let me know if you need further information. Thanks!
>
> This patch does not appear related to posted interrupts because it's
> using the kickfd (available buffer notification) instead of the callfd
> (used buffer notification). It's the glue that forwards a virtqueue
> kick to hardware.
>

I'm sorry, that's because I confused the terms in my head. I wanted to
say "host notifier memory regions" or "hardware doorbell mapping".
Maybe it is clearer that way?

> I don't think that userspace available buffer notification
> interception can be bypassed in the SVQ model. SVQ needs to take a
> copy of available buffers so it knows the scatter-gather lists before
> forwarding the kick to the vDPA device. If the notification is
> bypassed then SVQ cannot reliably capture the scatter-gather list.
>
> I also don't think it's possible to bypass userspace in the used
> buffer notification path. The vDPA used buffer notification must be
> intercepted so SVQ can mark memory pages in the scatter-gather list
> dirty before it fills in a guest used buffer and sends a guest used
> buffer notification.
>
> The guest used buffer notification should already be a VT-d Posted
> Interrupt on hardware that supports the feature. KVM takes care of
> that.
>
> I probably don't understand what the optimization idea is. You want
> SVQ to avoid a system call when sending vDPA available buffer
> notifications? That's not related to posted interrupts though, so I'm
> confused...
>

That's right, you described the idea perfectly :). I'll complete the
project summary, but if you think it does not qualify, we can leave
that part out of the proposal.

Thanks!


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 16:46         ` Eugenio Perez Martin
@ 2023-02-06 17:21           ` Stefan Hajnoczi
  2023-02-06 18:47             ` Eugenio Perez Martin
  0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-06 17:21 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Mon, 6 Feb 2023 at 11:47, Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Mon, Feb 6, 2023 at 3:21 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Mon, 6 Feb 2023 at 06:53, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > >
> > > On Sun, Feb 5, 2023 at 2:57 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > >
> > > > On Sun, 5 Feb 2023 at 03:15, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > > > >
> > > > > On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > >
> > > > > > Dear QEMU, KVM, and rust-vmm communities,
> > > > > > QEMU will apply for Google Summer of Code 2023
> > > > > > (https://summerofcode.withgoogle.com/) and has been accepted into
> > > > > > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > > > > > submit internship project ideas for QEMU, KVM, and rust-vmm!
> > > > > >
> > > > > > Please reply to this email by February 6th with your project ideas.
> > > > > >
> > > > > > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > > > > > be a mentor. Mentors support interns as they work on their project. It's a
> > > > > > great way to give back and you get to work with people who are just
> > > > > > starting out in open source.
> > > > > >
> > > > > > Good project ideas are suitable for remote work by a competent
> > > > > > programmer who is not yet familiar with the codebase. In
> > > > > > addition, they are:
> > > > > > - Well-defined - the scope is clear
> > > > > > - Self-contained - there are few dependencies
> > > > > > - Uncontroversial - they are acceptable to the community
> > > > > > - Incremental - they produce deliverables along the way
> > > > > >
> > > > > > Feel free to post ideas even if you are unable to mentor the project.
> > > > > > It doesn't hurt to share the idea!
> > > > > >
> > > > > > I will review project ideas and keep you up-to-date on QEMU's
> > > > > > acceptance into GSoC.
> > > > > >
> > > > > > Internship program details:
> > > > > > - Paid, remote work open source internships
> > > > > > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > > > > > hrs/week for 12 weeks
> > > > > > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > > > > > - Mentors typically spend at least 5 hours per week during the coding period
> > > > > >
> > > > > > For more background on QEMU internships, check out this video:
> > > > > > https://www.youtube.com/watch?v=xNVCX7YMUL8
> > > > > >
> > > > > > Please let me know if you have any questions!
> > > > > >
> > > > > > Stefan
> > > > > >
> > > > >
> > > > > Appending the different ideas here.
> > > >
> > > > Hi Eugenio,
> > > > Thanks for sharing your project ideas. I have added some questions
> > > > below before we add them to the ideas list wiki page.
> >
> > Thanks for the discussion. Do you want to focus on 1 or 2 project
> > ideas? 3 might be a bit much to mentor.
> >
>
> Right, my idea was to reduce that amount afterwards just in case some
> of them were rejected. But sure, we can filter out some if needed.

Do you mean in case there is no realistic applicant? You can do that
if you want, just keep in mind it may be more work for you during the
application phase. If it turns out there is a strong applicant for
each project idea you could see if someone else is willing to mentor
the project(s) you don't have time for.

I'll post the project ideas once you've updated them.

> > Please send an updated version of the project descriptions and I'll
> > post it on the wiki.
> >
> > > >
> > > > > VIRTIO_F_IN_ORDER feature support for virtio devices
> > > > > ===
> > > > > This was already a project last year, and it produced a few series
> > > > > upstream that were never merged. The previous series are very useful
> > > > > to start from, so it's not starting from scratch [1]:
> > > >
> > > > Has Zhi Guo stopped working on the patches?
> > > >
> > >
> > > I can ask him for sure.
> > >
> > > > What is the state of the existing patches? What work remains to be done?
> > > >
> > >
> > > There are some pending comments from upstream. However, if somebody
> > > starts from scratch they will need time to review parts of the VirtIO
> > > standard to understand the virtio in_order feature, both for split and
> > > packed vqs.
> >
> > The intern will need to take ownership and deal with code review
> > feedback for code they didn't write. That can be difficult for someone
> > who is new unless the requested changes are easy to address.
> >
>
> Indeed that is a very good point.
>
> > It's okay to start from scratch. You're in a better position than an
> > applicant to decide whether that's the best approach.
> >
> > >
> > >
> > > > >
> > > > > Summary
> > > > > ---
> > > > > Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)
> > > > >
> > > > > The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
> > > > > that devices and drivers can negotiate when the device uses
> > > > > descriptors in the same order in which they were made available by the
> > > > > driver.
> > > > >
> > > > > This feature can simplify device and driver implementations and
> > > > > increase performance. For example, when VIRTIO_F_IN_ORDER is
> > > > > negotiated, it may be easier to create a batch of buffers and reduce
> > > > > DMA transactions when the device uses a batch of buffers.
> > > > >
> > > > > Currently the devices and drivers available in Linux and QEMU do not
> > > > > support this feature. An implementation is available in DPDK for the
> > > > > virtio-net driver.
> > > > >
> > > > > Goals
> > > > > ---
> > > > > Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
> > > > > Linux (virtio-net or virtio-serial are good starting points).
> > > > > Generalize your approach to the common virtio core code for split and
> > > > > packed virtqueue layouts.
> > > > > If time allows, support for the packed virtqueue layout can be added
> > > > > to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.
> > > > >
> > > > > Shadow Virtqueue missing virtio features
> > > > > ===
> > > > >
> > > > > Summary
> > > > > ---
> > > > > Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
> > > > > that allows them to dynamically change a number of parameters like MAC
> > > > > or number of active queues. Changes made through CVQ to passthrough
> > > > > vDPA devices are inherently hard to track if CVQ is handled like the
> > > > > passthrough data queues, because qemu is not aware of that
> > > > > communication for performance reasons. In this situation, qemu is not
> > > > > able to migrate these devices, as it is not able to tell the actual
> > > > > state of the device.
> > > > >
> > > > > Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
> > > > > device, effectively forwarding the descriptors of that communication,
> > > > > tracking the device internal state, and being able to migrate it to a
> > > > > new destination qemu.
> > > > >
> > > > > To restore that state in the destination, SVQ is able to send these
> > > > > messages as regular CVQ commands. The code to understand and parse
> > > > > virtio-net CVQ commands is already in qemu as part of its emulated
> > > > > device, but the code to send some of the new state is not, and
> > > > > some features are missing. There is already code to restore basic
> > > > > commands like mac or multiqueue, and it is easy to use it as a
> > > > > template.
> > > > >
> > > > > Goals
> > > > > ---
> > > > > Implement sending of the missing virtio-net commands:
> > > > > * VIRTIO_NET_CTRL_RX family, to control receive mode.
> > > > > * VIRTIO_NET_CTRL_GUEST_OFFLOADS
> > > > > * VIRTIO_NET_CTRL_VLAN family
> > > > > * VIRTIO_NET_CTRL_MQ_HASH config
> > > > > * VIRTIO_NET_CTRL_MQ_RSS config
> > > >
> > > > Is there enough work here for a 350 hour or 175 hour GSoC project?
> > > >
> > >
> > > I think 175 hours should fit better. If needed, more features can be
> > > added (packed vq, ring reset, etc.), but to start contributing a 175
> > > hour project should work.
> > >
> > > > The project description mentions "there is already code to restore
> > > > basic commands like mac and multiqueue", please include a link.
> > > >
> > >
> > > MAC address support was merged together with ASID support, so the whole
> > > series is more complicated than it should be. Here is the most relevant patch:
> > > * https://lists.gnu.org/archive/html/qemu-devel/2022-09/msg00342.html
> > >
> > > MQ is way cleaner in that regard, and future series should look more
> > > similar to this one:
> > > * https://www.mail-archive.com/qemu-devel@nongnu.org/msg906273.html
> > >
> > > > > Shadow Virtqueue performance optimization
> > > > > ===
> > > > > Summary
> > > > > ---
> > > > > To perform a virtual machine live migration with a device external to
> > > > > qemu, qemu needs a way to know which memory the device modifies so it
> > > > > is able to resend it. Otherwise the guest would resume with invalid /
> > > > > outdated memory at the destination.
> > > > >
> > > > > This is especially hard with passthrough hardware devices, as
> > > > > transports like PCI impose a few security and performance challenges.
> > > > > As a method to overcome this for virtio devices, qemu can offer an
> > > > > emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
> > > > > instead of allowing the device to communicate directly with the guest.
> > > > > SVQ will then forward the writes to the guest, being the effective
> > > > > writer in the guest memory and knowing when a portion of it needs to
> > > > > be resent.
> > > > >
> > > > > As this is effectively breaking the passthrough and it adds extra
> > > > > steps in the communication, this comes with a performance penalty in
> > > > > some forms: Context switches, more memory reads and writes increasing
> > > > > cache pressure, etc.
> > > > >
> > > > > At this moment the SVQ code is not optimized. It cannot forward
> > > > > buffers in parallel using multiqueue and multithreading, and it does
> > > > > not use posted interrupts to notify the device, skipping the host
> > > > > kernel context switch (doorbells).
> > > > >
> > > > > The SVQ code requires minimal modifications for multithreading, and
> > > > > there are already examples of multithreaded devices, like virtio-blk,
> > > > > which can be used as templates. Regarding the posted interrupts, DPDK
> > > > > is able to use them, so that code can also be used as a template.
> > > > >
> > > > > Goals
> > > > > ---
> > > > > * Measure the latest SVQ performance compared to non-SVQ.
> > > >
> > > > Which benchmark workload and which benchmarking tool do you recommend?
> > > > Someone unfamiliar with QEMU and SVQ needs more details in order to
> > > > know what to do.
> > > >
> > >
> > > In my opinion netperf (TCP_STREAM & TCP_RR) or an iperf equivalent +
> > > testpmd in AF_PACKET mode should test these scenarios best. But maybe
> > > upstream will request additional testing. Feedback on this would
> > > actually be appreciated.
> > >
> > > My intention is not for the intern to develop new tests or anything
> > > like that; they are just a means to justify the changes in SVQ. This
> > > part would be very guided, or it can be offloaded from the project. So
> > > if these tools are not descriptive enough, maybe it's better to take
> > > this out of the goals and add it to the description instead.
> >
> > Great, "netperf (TCP_STREAM & TCP_RR) or iperf equivalent + testpmd in
> > AF_PACKET mode" is enough information.
> >
> > >
> > > > > * Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
> > > >
> > > > What do you have in mind? Allowing individual virtqueues to be
> > > > assigned to IOThreads? Or processing all virtqueues in a single
> > > > IOThread (like virtio-blk and virtio-scsi do today)?
> > > >
> > >
> > > My idea was to use iothreads. I thought virtio-blk and virtio-scsi
> > > were already done that way. Is there a reason / advantage to using
> > > just a single iothread?
> >
> > The reason for only supporting a single IOThread at the moment is
> > thread-safety. There is multi-queue work in progress that will remove
> > this limitation in the future.
> >
> > I sent a patch series proposing a command-line syntax for multi-queue here:
> > https://www.mail-archive.com/qemu-devel@nongnu.org/msg933001.html
> >
> > The idea is that the same syntax can be used by other devices that
> > support mapping vqs to multiple IOThreads.
> >
>
> Understood. I'll take a look, thanks!
>
> > >
> > > > > * Add posted interrupt capabilities to QEMU, following the DPDK model.
> > > >
> > > > What is this about? I thought KVM uses posted interrupts when
> > > > available, so what needs to be done here? Please also include a link
> > > > to the relevant DPDK code.
> > > >
> > >
> > > The guest in KVM may use posted interrupts but SVQ code runs in
> > > userland qemu :). There were no previous uses of HW posted interrupts
> > > as far as I know so SVQ is only able to use vhost-vdpa kick eventfds
> > > to notify queues. This has a performance penalty in the form of host
> > > kernel context switches.
> > >
> > > If I'm not wrong this patch adds it to DPDK, but I may be missing
> > > additional context or versions:
> > > * https://lore.kernel.org/all/1579539790-3882-31-git-send-email-matan@mellanox.com/
> > >
> > > Please let me know if you need further information. Thanks!
> >
> > This patch does not appear related to posted interrupts because it's
> > using the kickfd (available buffer notification) instead of the callfd
> > (used buffer notification). It's the glue that forwards a virtqueue
> > kick to hardware.
> >
>
> I'm sorry, that's because I confused the terms in my head. I wanted to
> say "host notifier memory regions" or "hardware doorbell mapping".
> Maybe it is clearer that way?

The VIRTIO spec calls this memory the Queue Notify address.

>
> > I don't think that userspace available buffer notification
> > interception can be bypassed in the SVQ model. SVQ needs to take a
> > copy of available buffers so it knows the scatter-gather lists before
> > forwarding the kick to the vDPA device. If the notification is
> > bypassed then SVQ cannot reliably capture the scatter-gather list.
> >
> > I also don't think it's possible to bypass userspace in the used
> > buffer notification path. The vDPA used buffer notification must be
> > intercepted so SVQ can mark memory pages in the scatter-gather list
> > dirty before it fills in a guest used buffer and sends a guest used
> > buffer notification.
> >
> > The guest used buffer notification should already be a VT-d Posted
> > Interrupt on hardware that supports the feature. KVM takes care of
> > that.
> >
> > I probably don't understand what the optimization idea is. You want
> > SVQ to avoid a system call when sending vDPA available buffer
> > notifications? That's not related to posted interrupts though, so I'm
> > confused...
> >
>
> That's right, you described the idea perfectly :). I'll complete the
> project summary, but if you think it does not qualify, we can leave
> that part out of the proposal.

Thanks, I think I get it now. The task is to implement the dual of
QEMU's virtio_queue_set_host_notifier_mr() so SVQ can perform
virtqueue kicks on the vDPA device via memory store instructions.

That's a cool feature and I think it should be included in the project idea.

Stefan


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 17:21           ` Stefan Hajnoczi
@ 2023-02-06 18:47             ` Eugenio Perez Martin
  2023-02-07  1:00               ` Stefan Hajnoczi
  0 siblings, 1 reply; 24+ messages in thread
From: Eugenio Perez Martin @ 2023-02-06 18:47 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Mon, Feb 6, 2023 at 6:22 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Mon, 6 Feb 2023 at 11:47, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >
> > On Mon, Feb 6, 2023 at 3:21 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > On Mon, 6 Feb 2023 at 06:53, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > > >
> > > > On Sun, Feb 5, 2023 at 2:57 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > >
> > > > > On Sun, 5 Feb 2023 at 03:15, Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > > > > >
> > > > > > On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > > >
> > > > > > > Dear QEMU, KVM, and rust-vmm communities,
> > > > > > > QEMU will apply for Google Summer of Code 2023
> > > > > > > (https://summerofcode.withgoogle.com/) and has been accepted into
> > > > > > > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > > > > > > submit internship project ideas for QEMU, KVM, and rust-vmm!
> > > > > > >
> > > > > > > Please reply to this email by February 6th with your project ideas.
> > > > > > >
> > > > > > > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > > > > > > be a mentor. Mentors support interns as they work on their project. It's a
> > > > > > > great way to give back and you get to work with people who are just
> > > > > > > starting out in open source.
> > > > > > >
> > > > > > > Good project ideas are suitable for remote work by a competent
> > > > > > > programmer who is not yet familiar with the codebase. In
> > > > > > > addition, they are:
> > > > > > > - Well-defined - the scope is clear
> > > > > > > - Self-contained - there are few dependencies
> > > > > > > - Uncontroversial - they are acceptable to the community
> > > > > > > - Incremental - they produce deliverables along the way
> > > > > > >
> > > > > > > Feel free to post ideas even if you are unable to mentor the project.
> > > > > > > It doesn't hurt to share the idea!
> > > > > > >
> > > > > > > I will review project ideas and keep you up-to-date on QEMU's
> > > > > > > acceptance into GSoC.
> > > > > > >
> > > > > > > Internship program details:
> > > > > > > - Paid, remote work open source internships
> > > > > > > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > > > > > > hrs/week for 12 weeks
> > > > > > > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > > > > > > - Mentors typically spend at least 5 hours per week during the coding period
> > > > > > >
> > > > > > > For more background on QEMU internships, check out this video:
> > > > > > > https://www.youtube.com/watch?v=xNVCX7YMUL8
> > > > > > >
> > > > > > > Please let me know if you have any questions!
> > > > > > >
> > > > > > > Stefan
> > > > > > >
> > > > > >
> > > > > > Appending the different ideas here.
> > > > >
> > > > > Hi Eugenio,
> > > > > Thanks for sharing your project ideas. I have added some questions
> > > > > below before we add them to the ideas list wiki page.
> > >
> > > Thanks for the discussion. Do you want to focus on 1 or 2 project
> > > ideas? 3 might be a bit much to mentor.
> > >
> >
> > Right, my idea was to reduce that amount afterwards just in case some
> > of them were rejected. But sure, we can filter out some if needed.
>
> Do you mean in case there is no realistic applicant? You can do that
> if you want, just keep in mind it may be more work for you during the
> application phase. If it turns out there is a strong applicant for
> each project idea you could see if someone else is willing to mentor
> the project(s) you don't have time for.
>

Good point, I'll discard the IN_ORDER project from the list.

> I'll post the project ideas once you've updated them.
>
> > > Please send an updated version of the project descriptions and I'll
> > > post it on the wiki.
> > >
> > > > >
> > > > > > VIRTIO_F_IN_ORDER feature support for virtio devices
> > > > > > ===
> > > > > > This was already a project last year, and it produced a few series
> > > > > > upstream that were never merged. The previous series are very useful
> > > > > > to start from, so it's not starting from scratch [1]:
> > > > >
> > > > > Has Zhi Guo stopped working on the patches?
> > > > >
> > > >
> > > > I can ask him for sure.
> > > >
> > > > > What is the state of the existing patches? What work remains to be done?
> > > > >
> > > >
> > > > There are some pending comments from upstream. However, if somebody
> > > > starts from scratch they will need time to review parts of the VirtIO
> > > > standard to understand the virtio in_order feature, both for split and
> > > > packed vqs.
> > >
> > > The intern will need to take ownership and deal with code review
> > > feedback for code they didn't write. That can be difficult for someone
> > > who is new unless the requested changes are easy to address.
> > >
> >
> > Indeed that is a very good point.
> >
> > > It's okay to start from scratch. You're in a better position than an
> > > applicant to decide whether that's the best approach.
> > >
> > > >
> > > >
> > > > > >
> > > > > > Summary
> > > > > > ---
> > > > > > Implement VIRTIO_F_IN_ORDER in QEMU and Linux (vhost and virtio drivers)
> > > > > >
> > > > > > The VIRTIO specification defines a feature bit (VIRTIO_F_IN_ORDER)
> > > > > > that devices and drivers can negotiate when the device uses
> > > > > > descriptors in the same order in which they were made available by the
> > > > > > driver.
> > > > > >
> > > > > > This feature can simplify device and driver implementations and
> > > > > > increase performance. For example, when VIRTIO_F_IN_ORDER is
> > > > > > negotiated, it may be easier to create a batch of buffers and reduce
> > > > > > DMA transactions when the device uses a batch of buffers.
> > > > > >
> > > > > > Currently the devices and drivers available in Linux and QEMU do not
> > > > > > support this feature. An implementation is available in DPDK for the
> > > > > > virtio-net driver.
> > > > > >
> > > > > > Goals
> > > > > > ---
> > > > > > Implement VIRTIO_F_IN_ORDER for a single device/driver in QEMU and
> > > > > > Linux (virtio-net or virtio-serial are good starting points).
> > > > > > Generalize your approach to the common virtio core code for split and
> > > > > > packed virtqueue layouts.
> > > > > > If time allows, support for the packed virtqueue layout can be added
> > > > > > to Linux vhost, QEMU's libvhost-user, and/or QEMU's virtio qtest code.
> > > > > >
> > > > > > Shadow Virtqueue missing virtio features
> > > > > > ===
> > > > > >
> > > > > > Summary
> > > > > > ---
> > > > > > Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
> > > > > > that allows them to dynamically change a number of parameters like MAC
> > > > > > or number of active queues. Changes made through CVQ to passthrough
> > > > > > vDPA devices are inherently hard to track if CVQ is handled like the
> > > > > > passthrough data queues, because qemu is not aware of that
> > > > > > communication for performance reasons. In this situation, qemu is not
> > > > > > able to migrate these devices, as it is not able to tell the actual
> > > > > > state of the device.
> > > > > >
> > > > > > Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
> > > > > > device, effectively forwarding the descriptors of that communication,
> > > > > > tracking the device internal state, and being able to migrate it to a
> > > > > > new destination qemu.
> > > > > >
> > > > > > To restore that state in the destination, SVQ is able to send these
> > > > > > messages as regular CVQ commands. The code to understand and parse
> > > > > > virtio-net CVQ commands is already in qemu as part of its emulated
> > > > > > device, but the code to send some of the new state is not, and
> > > > > > some features are missing. There is already code to restore basic
> > > > > > commands like mac or multiqueue, and it is easy to use it as a
> > > > > > template.
> > > > > >
> > > > > > Goals
> > > > > > ---
> > > > > > To implement missing virtio-net commands sending:
> > > > > > * VIRTIO_NET_CTRL_RX family, to control receive mode.
> > > > > > * VIRTIO_NET_CTRL_GUEST_OFFLOADS
> > > > > > * VIRTIO_NET_CTRL_VLAN family
> > > > > > * VIRTIO_NET_CTRL_MQ_HASH config
> > > > > > * VIRTIO_NET_CTRL_MQ_RSS config
> > > > >
> > > > > Is there enough work here for a 350 hour or 175 hour GSoC project?
> > > > >
> > > >
> > > > I think a 175 hour project should fit better. If needed, more
> > > > features can be added (packed vq, ring reset, etc.), but a 175 hour
> > > > project should work as a starting point for contributing.
> > > >
> > > > > The project description mentions "there is already code to restore
> > > > > basic commands like mac and multiqueue", please include a link.
> > > > >
> > > >
> > > > MAC address was merged with ASID support so the whole series is more
> > > > complicated than it should be. Here is the most relevant patch:
> > > > * https://lists.gnu.org/archive/html/qemu-devel/2022-09/msg00342.html
> > > >
> > > > MQ is way cleaner in that regard, and future series should look more
> > > > similar to this one:
> > > > * https://www.mail-archive.com/qemu-devel@nongnu.org/msg906273.html
> > > >
> > > > > > Shadow Virtqueue performance optimization
> > > > > > ===
> > > > > > Summary
> > > > > > ---
> > > > > > To perform a virtual machine live migration with an external device to
> > > > > > qemu, qemu needs a way to know which memory the device modifies so it
> > > > > > is able to resend it. Otherwise the guest would resume with invalid /
> > > > > > outdated memory in the destination.
> > > > > >
> > > > > > This is especially hard with passthrough hardware devices, as
> > > > > > transports like PCI impose a few security and performance challenges.
> > > > > > As a method to overcome this for virtio devices, qemu can offer an
> > > > > > emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
> > > > > > instead of allowing the device to communicate directly with the guest.
> > > > > > SVQ will then forward the writes to the guest, being the effective
> > > > > > writer in the guest memory and knowing when a portion of it needs to
> > > > > > be resent.
> > > > > >
> > > > > > As this is effectively breaking the passthrough and it adds extra
> > > > > > steps in the communication, this comes with a performance penalty in
> > > > > > some forms: Context switches, more memory reads and writes increasing
> > > > > > cache pressure, etc.
> > > > > >
> > > > > > At this moment the SVQ code is not optimized. It cannot forward
> > > > > > buffers in parallel using multiqueue and multithread, and it does not
> > > > > > use posted interrupts to notify the device skipping the host kernel
> > > > > > context switch (doorbells).
> > > > > >
> > > > > > The SVQ code requires minimal modifications for multithreading,
> > > > > > and there are already multithreaded devices, like virtio-blk,
> > > > > > that can be used as templates. Regarding the posted
> > > > > > interrupts, DPDK is able to use them so that code can also be used as
> > > > > > a template.
> > > > > >
> > > > > > Goals
> > > > > > ---
> > > > > > * Measure the latest SVQ performance compared to non-SVQ.
> > > > >
> > > > > Which benchmark workload and which benchmarking tool do you recommend?
> > > > > Someone unfamiliar with QEMU and SVQ needs more details in order to
> > > > > know what to do.
> > > > >
> > > >
> > > > In my opinion netperf (TCP_STREAM & TCP_RR) or iperf equivalent +
> > > > testpmd in AF_PACKET mode should test these scenarios better. But
> > > > maybe upstream will request additional testing. Feedback on this
> > > > would actually be appreciated.
> > > >
> > > > My intention is not for the intern to develop new tests or anything
> > > > like that, they are just a means to justify the changes in SVQ. This
> > > > part would be very guided, or it can be offloaded from the project. So
> > > > if these tools are not descriptive enough, maybe it's better to take
> > > > this out of the goals and move it into the description.
> > >
> > > Great, "netperf (TCP_STREAM & TCP_RR) or iperf equivalent + testpmd in
> > > AF_PACKET mode" is enough information.
> > >
> > > >
> > > > > > * Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
> > > > >
> > > > > What do you have in mind? Allowing individual virtqueues to be
> > > > > assigned to IOThreads? Or processing all virtqueues in a single
> > > > > IOThread (like virtio-blk and virtio-scsi do today)?
> > > > >
> > > >
> > > > My idea was to use iothreads. I thought virtio-blk and virtio-scsi
> > > > were done that way, actually. Is there a reason or advantage to
> > > > using just a single iothread?
> > >
> > > The reason for only supporting a single IOThread at the moment is
> > > thread-safety. There is multi-queue work in progress that will remove
> > > this limitation in the future.
> > >
> > > I sent a patch series proposing a command-line syntax for multi-queue here:
> > > https://www.mail-archive.com/qemu-devel@nongnu.org/msg933001.html
> > >
> > > The idea is that the same syntax can be used by other devices that
> > > support mapping vqs to multiple IOThreads.
> > >
> >
> > Understood. I'll take a look, thanks!
> >
> > > >
> > > > > > * Add posted thread capabilities to QEMU, following the model of DPDK to it.
> > > > >
> > > > > What is this about? I thought KVM uses posted interrupts when
> > > > > available, so what needs to be done here? Please also include a link
> > > > > to the relevant DPDK code.
> > > > >
> > > >
> > > > The guest in KVM may use posted interrupts but SVQ code runs in
> > > > userland qemu :). There were no previous uses of HW posted interrupts
> > > > as far as I know so SVQ is only able to use vhost-vdpa kick eventfds
> > > > to notify queues. This has a performance penalty in the form of host
> > > > kernel context switches.
> > > >
> > > > If I'm not wrong this patch adds it to DPDK, but I may be missing
> > > > additional context or versions:
> > > > * https://lore.kernel.org/all/1579539790-3882-31-git-send-email-matan@mellanox.com/
> > > >
> > > > Please let me know if you need further information. Thanks!
> > >
> > > This patch does not appear related to posted interrupts because it's
> > > using the kickfd (available buffer notification) instead of the callfd
> > > (used buffer notification). It's the glue that forwards a virtqueue
> > > kick to hardware.
> > >
> >
> > I'm sorry, that's because I confused the terms in my head and I wanted
> > to say "host notifiers memory regions" or "hardware doorbell mapping".
> > Maybe it is clearer that way?
>
> The VIRTIO spec calls this memory the Queue Notify address.
>
> >
> > > I don't think that userspace available buffer notification
> > > interception can be bypassed in the SVQ model. SVQ needs to take a
> > > copy of available buffers so it knows the scatter-gather lists before
> > > forwarding the kick to the vDPA device. If the notification is
> > > bypassed then SVQ cannot reliably capture the scatter-gather list.
> > >
> > > I also don't think it's possible to bypass userspace in the used
> > > buffer notification path. The vDPA used buffer notification must be
> > > intercepted so SVQ can mark memory pages in the scatter-gather list
> > > dirty before it fills in a guest used buffer and sends a guest used
> > > buffer notification.
> > >
> > > The guest used buffer notification should already be a VT-d Posted
> > > Interrupt on hardware that supports the feature. KVM takes care of
> > > that.
> > >
> > > I probably don't understand what the optimization idea is. You want
> > > SVQ to avoid a system call when sending vDPA available buffer
> > > notifications? That's not related to posted interrupts though, so I'm
> > > confused...
> > >
> >
> > That's right, you described the idea perfectly that way :). I'll
> > complete the project's summary, but if you think it doesn't qualify,
> > we can leave that part out of the proposal.
>
> Thanks, I think I get it now. The task is to implement the dual of
> QEMU's virtio_queue_set_host_notifier_mr() so SVQ can perform
> virtqueue kicks on the vDPA device via memory store instructions.
>
> That's a cool feature and I think it should be included in the project idea.
>
> Stefan
>

Thanks for all the feedback, it makes the proposals much clearer. I've
added the updated proposals here; please let me know if you think they
need further modifications.

Shadow Virtqueue missing virtio features
===

Summary
---
Some VirtIO devices like virtio-net have a control virtqueue (CVQ)
that allows them to dynamically change a number of parameters like MAC
or the number of active queues. Changes made through CVQ to passthrough
devices using vDPA are inherently hard to track if CVQ is handled like
the passthrough data queues, because qemu is not aware of that
communication for performance reasons. In this situation, qemu is not
able to migrate these devices, as it cannot tell the actual state of
the device.

Shadow Virtqueue (SVQ) allows qemu to offer an emulated queue to the
device, effectively forwarding the descriptors of that communication,
tracking the device's internal state, and making it possible to migrate
that state to a new destination qemu.

To restore that state in the destination, SVQ is able to send these
messages as regular CVQ commands. The code to understand and parse
virtio-net CVQ commands is already in qemu as part of its emulated
device, but the code to send some of the new state is not, and
some features are missing. There is already code to restore basic
commands like mac [1] or multiqueue [2], and it is easy to use them as
a template.

[1] https://lists.gnu.org/archive/html/qemu-devel/2022-09/msg00342.html
[2] https://www.mail-archive.com/qemu-devel@nongnu.org/msg906273.html

Goals
---
Implement sending of the missing virtio-net commands:
* VIRTIO_NET_CTRL_RX family, to control receive mode.
* VIRTIO_NET_CTRL_GUEST_OFFLOADS
* VIRTIO_NET_CTRL_VLAN family
* VIRTIO_NET_CTRL_MQ_HASH config
* VIRTIO_NET_CTRL_MQ_RSS config

Shadow Virtqueue performance optimization
===
Summary
---
To live-migrate a virtual machine that uses a device external to qemu,
qemu needs a way to know which memory the device modifies so it can
resend it. Otherwise the guest would resume with invalid / outdated
memory on the destination.

This is especially hard with passthrough hardware devices, as
transports like PCI impose a few security and performance challenges.
As a method to overcome this for virtio devices, qemu can offer an
emulated virtqueue to the device, called Shadow Virtqueue (SVQ),
instead of allowing the device to communicate directly with the guest.
SVQ will then forward the writes to the guest, being the effective
writer in the guest memory and knowing when a portion of it needs to
be resent.

As this is effectively breaking the passthrough and it adds extra
steps in the communication, this comes with a performance penalty in
some forms: Context switches, more memory reads and writes increasing
cache pressure, etc.

At this moment the SVQ code is not optimized. It cannot forward
buffers in parallel using multiqueue and multithreading, and it does
not use the Queue Notify address to notify the device of available
buffers, so each notification requires an extra host kernel context
switch.

The SVQ code requires minimal modifications for multithreading, and
there are already multithreaded devices, like virtio-blk, that can be
used as templates. A proposal for the command-line syntax to map
virtio queues to iothreads has already been sent to the qemu mailing
list [1].

Regarding the use of the Queue Notify address, DPDK already uses it,
so that code can also serve as a template.

[1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg933001.html

Goals
---
* Measure the latest SVQ performance compared to non-SVQ with
standardized profiling tools like netperf (TCP_STREAM & TCP_RR) or
iperf equivalent + DPDK's testpmd in AF_PACKET mode.
* Add multithreading to SVQ, extracting the code from the Big QEMU Lock (BQL).
* Add Queue Notify write capabilities to QEMU, following DPDK's model.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
  2023-01-27 17:10 ` Warner Losh
  2023-02-05  8:14 ` Eugenio Perez Martin
@ 2023-02-06 19:50 ` Alberto Faria
  2023-02-06 21:21   ` Stefan Hajnoczi
  2023-02-17 16:23 ` Stefano Garzarella
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Alberto Faria @ 2023-02-06 19:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Fri, Jan 27, 2023 at 3:17 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> Dear QEMU, KVM, and rust-vmm communities,
> QEMU will apply for Google Summer of Code 2023
> (https://summerofcode.withgoogle.com/) and has been accepted into
> Outreachy May 2023 (https://www.outreachy.org/). You can now
> submit internship project ideas for QEMU, KVM, and rust-vmm!
>
> Please reply to this email by February 6th with your project ideas.
>
> If you have experience contributing to QEMU, KVM, or rust-vmm you can
> be a mentor. Mentors support interns as they work on their project. It's a
> great way to give back and you get to work with people who are just
> starting out in open source.
>
> Good project ideas are suitable for remote work by a competent
> programmer who is not yet familiar with the codebase. In
> addition, they are:
> - Well-defined - the scope is clear
> - Self-contained - there are few dependencies
> - Uncontroversial - they are acceptable to the community
> - Incremental - they produce deliverables along the way
>
> Feel free to post ideas even if you are unable to mentor the project.
> It doesn't hurt to share the idea!
>
> I will review project ideas and keep you up-to-date on QEMU's
> acceptance into GSoC.
>
> Internship program details:
> - Paid, remote work open source internships
> - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> hrs/week for 12 weeks
> - Mentored by volunteers from QEMU, KVM, and rust-vmm
> - Mentors typically spend at least 5 hours per week during the coding period
>
> For more background on QEMU internships, check out this video:
> https://www.youtube.com/watch?v=xNVCX7YMUL8
>
> Please let me know if you have any questions!
>
> Stefan

FWIW there is some work to be done on libblkio [1] that QEMU could
benefit from. Maybe these would be appropriate as QEMU projects?

One possible project would be to add zoned device support to libblkio
and all its drivers [2]. This would allow QEMU to use zoned
vhost-user-blk devices, for instance (once general zoned device
support lands [3]).

Another idea would be to add an NVMe driver to libblkio that
internally relies on xNVMe [4, 5]. This would enable QEMU users to use
the NVMe drivers from SPDK or libvfn.

Thanks,
Alberto

[1] https://libblkio.gitlab.io/libblkio/
[2] https://gitlab.com/libblkio/libblkio/-/issues/44
[3] https://lore.kernel.org/qemu-devel/20230129102850.84731-1-faithilikerun@gmail.com/
[4] https://gitlab.com/libblkio/libblkio/-/issues/45
[5] https://github.com/OpenMPDK/xNVMe



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 19:50 ` Alberto Faria
@ 2023-02-06 21:21   ` Stefan Hajnoczi
  2023-02-07 10:23     ` Alberto Faria
  0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-06 21:21 UTC (permalink / raw)
  To: Alberto Faria
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Mon, 6 Feb 2023 at 14:51, Alberto Faria <afaria@redhat.com> wrote:
>
> On Fri, Jan 27, 2023 at 3:17 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > Dear QEMU, KVM, and rust-vmm communities,
> > QEMU will apply for Google Summer of Code 2023
> > (https://summerofcode.withgoogle.com/) and has been accepted into
> > Outreachy May 2023 (https://www.outreachy.org/). You can now
> > submit internship project ideas for QEMU, KVM, and rust-vmm!
> >
> > Please reply to this email by February 6th with your project ideas.
> >
> > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > be a mentor. Mentors support interns as they work on their project. It's a
> > great way to give back and you get to work with people who are just
> > starting out in open source.
> >
> > Good project ideas are suitable for remote work by a competent
> > programmer who is not yet familiar with the codebase. In
> > addition, they are:
> > - Well-defined - the scope is clear
> > - Self-contained - there are few dependencies
> > - Uncontroversial - they are acceptable to the community
> > - Incremental - they produce deliverables along the way
> >
> > Feel free to post ideas even if you are unable to mentor the project.
> > It doesn't hurt to share the idea!
> >
> > I will review project ideas and keep you up-to-date on QEMU's
> > acceptance into GSoC.
> >
> > Internship program details:
> > - Paid, remote work open source internships
> > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > hrs/week for 12 weeks
> > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > - Mentors typically spend at least 5 hours per week during the coding period
> >
> > For more background on QEMU internships, check out this video:
> > https://www.youtube.com/watch?v=xNVCX7YMUL8
> >
> > Please let me know if you have any questions!
> >
> > Stefan
>
> FWIW there is some work to be done on libblkio [1] that QEMU could
> benefit from. Maybe these would be appropriate as QEMU projects?
>
> One possible project would be to add zoned device support to libblkio
> and all its drivers [2]. This would allow QEMU to use zoned
> vhost-user-blk devices, for instance (once general zoned device
> support lands [3]).
>
> Another idea would be to add an NVMe driver to libblkio that
> internally relies on xNVMe [4, 5]. This would enable QEMU users to use
> the NVMe drivers from SPDK or libvfn.

Great that you're interested, Alberto! Both sound feasible. I would
like to co-mentor the zoned storage project or can at least commit to
being available to help because zoned storage is currently on my mind
anyway :).

Do you want to write up one or both of them using the project template
below? You can use the other project ideas as a reference for how much
detail to include: https://wiki.qemu.org/Google_Summer_of_Code_2023

=== TITLE ===

 '''Summary:''' Short description of the project

 Detailed description of the project.

 '''Links:'''
 * Wiki links to relevant material
 * External links to mailing lists or web sites

 '''Details:'''
 * Skill level: beginner or intermediate or advanced
 * Language: C
 * Mentor: Email address and IRC nick
 * Suggested by: Person who suggested the idea

Thanks,
Stefan

>
> Thanks,
> Alberto
>
> [1] https://libblkio.gitlab.io/libblkio/
> [2] https://gitlab.com/libblkio/libblkio/-/issues/44
> [3] https://lore.kernel.org/qemu-devel/20230129102850.84731-1-faithilikerun@gmail.com/
> [4] https://gitlab.com/libblkio/libblkio/-/issues/45
> [5] https://github.com/OpenMPDK/xNVMe
>


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 18:47             ` Eugenio Perez Martin
@ 2023-02-07  1:00               ` Stefan Hajnoczi
  0 siblings, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-07  1:00 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione, Jason Wang

On Mon, 6 Feb 2023 at 13:48, Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> Thanks for all the feedback, it makes the proposal way clearer. I add
> the updated proposals here, please let me know if you think they need
> further modifications.

Thanks, I have added them to the wiki:
https://wiki.qemu.org/Google_Summer_of_Code_2023

I edited them more (e.g. specifically mentioned vhost_svq_kick() and
vhost_vdpa_host_notifier_init() so it's clear which functions need to
be tweaked for the mmap Queue Notify address support). Please feel
free to make changes.

Stefan


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-06 21:21   ` Stefan Hajnoczi
@ 2023-02-07 10:23     ` Alberto Faria
  2023-02-07 10:29       ` Alberto Faria
  0 siblings, 1 reply; 24+ messages in thread
From: Alberto Faria @ 2023-02-07 10:23 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Mon, Feb 6, 2023 at 9:22 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> Great that you're interested, Alberto! Both sound feasible. I would
> like to co-mentor the zoned storage project or can at least commit to
> being available to help because zoned storage is currently on my mind
> anyway :).

Perfect, I'll have time to co-mentor one project, but probably not
two, so let's leave the NVMe driver project aside for now. If anyone
wants to take that one over, though, go for it.

> Do you want to write up one or both of them using the project template
> below? You can use the other project ideas as a reference for how much
> detail to include: https://wiki.qemu.org/Google_Summer_of_Code_2023

I feel like this is closer to a 175 hour project than a 350 hour one,
but I'm not entirely sure.

  === Zoned device support for libblkio ===

   '''Summary:''' Add support for zoned block devices to the libblkio library.

   Zoned block devices are special kinds of disks that are split into
   several regions called zones, where each zone may only be written
   sequentially and data can't be updated without resetting the entire zone.

   libblkio is a library that provides an API for efficiently accessing
   block devices using modern high-performance block I/O interfaces like
   Linux io_uring.

   The goal is to extend libblkio so users can use it to access zoned devices
   properly. This will require adding support for more request types, expanding
   its API to expose additional metadata about the device, and making the
   appropriate changes to each libblkio "driver".

   This is important for QEMU since it will soon support zoned devices too
   and several of its BlockDrivers rely on libblkio. In particular, this
   project would enable QEMU to access zoned vhost-user-blk and
   vhost-vdpa-blk devices.

   '''Links:'''
   * https://zonedstorage.io/
   * https://libblkio.gitlab.io/libblkio/
   * https://gitlab.com/libblkio/libblkio/-/issues/44

   '''Details:'''
   * Project size: 175 hours
   * Skill level: intermediate
   * Language: Rust, C
   * Mentor: Alberto Faria <afaria@redhat.com>, Stefan Hajnoczi <stefanha@gmail.com>
   * Suggested by: Alberto Faria <afaria@redhat.com>

Alberto



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-07 10:23     ` Alberto Faria
@ 2023-02-07 10:29       ` Alberto Faria
  2023-02-07 14:32         ` Stefan Hajnoczi
  0 siblings, 1 reply; 24+ messages in thread
From: Alberto Faria @ 2023-02-07 10:29 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Tue, Feb 7, 2023 at 10:23 AM Alberto Faria <afaria@redhat.com> wrote:
> On Mon, Feb 6, 2023 at 9:22 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > Great that you're interested, Alberto! Both sound feasible. I would
> > like to co-mentor the zoned storage project or can at least commit to
> > being available to help because zoned storage is currently on my mind
> > anyway :).
>
> Perfect, I'll have time to co-mentor one project, but probably not
> two, so let's leave the NVMe driver project aside for now. If anyone
> wants to take that one over, though, go for it.
>
> > Do you want to write up one or both of them using the project template
> > below? You can use the other project ideas as a reference for how much
> > detail to include: https://wiki.qemu.org/Google_Summer_of_Code_2023
>
> I feel like this is closer to a 175 hour project than a 350 hour one,
> but I'm not entirely sure.
>
>   === Zoned device support for libblkio ===
>
>    '''Summary:''' Add support for zoned block devices to the libblkio library.
>
>    Zoned block devices are special kinds of disks that are split into
>    several regions called zones, where each zone may only be written
>    sequentially and data can't be updated without resetting the entire zone.
>
>    libblkio is a library that provides an API for efficiently accessing
>    block devices using modern high-performance block I/O interfaces like
>    Linux io_uring.
>
>    The goal is to extend libblkio so users can use it to access zoned devices
>    properly. This will require adding support for more request types, expanding
>    its API to expose additional metadata about the device, and making the
>    appropriate changes to each libblkio "driver".
>
>    This is important for QEMU since it will soon support zoned devices too
>    and several of its BlockDrivers rely on libblkio. In particular, this
>    project would enable QEMU to access zoned vhost-user-blk and
>    vhost-vdpa-blk devices.

Also, a stretch/bonus goal could be to make the necessary changes to
QEMU to actually make use of libblkio's zoned device support.

>    '''Links:'''
>    * https://zonedstorage.io/
>    * https://libblkio.gitlab.io/libblkio/
>    * https://gitlab.com/libblkio/libblkio/-/issues/44
>
>    '''Details:'''
>    * Project size: 175 hours
>    * Skill level: intermediate
>    * Language: Rust, C
>    * Mentor: Alberto Faria <afaria@redhat.com>, Stefan Hajnoczi <stefanha@gmail.com>
>    * Suggested by: Alberto Faria <afaria@redhat.com>
>
> Alberto



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-07 10:29       ` Alberto Faria
@ 2023-02-07 14:32         ` Stefan Hajnoczi
  0 siblings, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-07 14:32 UTC (permalink / raw)
  To: Alberto Faria
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Tue, 7 Feb 2023 at 05:30, Alberto Faria <afaria@redhat.com> wrote:
>
> On Tue, Feb 7, 2023 at 10:23 AM Alberto Faria <afaria@redhat.com> wrote:
> > On Mon, Feb 6, 2023 at 9:22 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > Great that you're interested, Alberto! Both sound feasible. I would
> > > like to co-mentor the zoned storage project or can at least commit to
> > > being available to help because zoned storage is currently on my mind
> > > anyway :).
> >
> > Perfect, I'll have time to co-mentor one project, but probably not
> > two, so let's leave the NVMe driver project aside for now. If anyone
> > wants to take that one over, though, go for it.
> >
> > > Do you want to write up one or both of them using the project template
> > > below? You can use the other project ideas as a reference for how much
> > > detail to include: https://wiki.qemu.org/Google_Summer_of_Code_2023
> >
> > I feel like this is closer to a 175 hour project than a 350 hour one,
> > but I'm not entirely sure.
> >
> >   === Zoned device support for libblkio ===
> >
> >    '''Summary:''' Add support for zoned block devices to the libblkio library.
> >
> >    Zoned block devices are special kinds of disks that are split into
> >    several regions called zones, where each zone may only be written
> >    sequentially and data can't be updated without resetting the entire zone.
> >
> >    libblkio is a library that provides an API for efficiently accessing
> >    block devices using modern high-performance block I/O interfaces like
> >    Linux io_uring.
> >
> >    The goal is to extend libblkio so users can use it to access zoned devices
> >    properly. This will require adding support for more request types, expanding
> >    its API to expose additional metadata about the device, and making the
> >    appropriate changes to each libblkio "driver".
> >
> >    This is important for QEMU since it will soon support zoned devices too
> >    and several of its BlockDrivers rely on libblkio. In particular, this
> >    project would enable QEMU to access zoned vhost-user-blk and
> >    vhost-vdpa-blk devices.
>
> Also, a stretch/bonus goal could be to make the necessary changes to
> QEMU to actually make use of libblkio's zoned device support.

Great, I have added it to the wiki and included a list of tasks:
https://wiki.qemu.org/Internships/ProjectIdeas/LibblkioZonedStorage

Feel free to edit it.

I think this project could just as easily be 350 hours, but I'm happy
to mentor a 175 hour project with a more modest scope.

Stefan


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 22:01   ` Stefan Hajnoczi
@ 2023-02-08 23:01     ` Warner Losh
  2023-02-09  1:25       ` Stefan Hajnoczi
  0 siblings, 1 reply; 24+ messages in thread
From: Warner Losh @ 2023-02-08 23:01 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel


On Fri, Jan 27, 2023 at 3:02 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Fri, 27 Jan 2023 at 12:10, Warner Losh <imp@bsdimp.com> wrote:
> >
> > [[ cc list trimmed to just qemu-devel ]]
> >
> >> On Fri, Jan 27, 2023 at 8:18 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >>
> >> Dear QEMU, KVM, and rust-vmm communities,
> >> QEMU will apply for Google Summer of Code 2023
> >> (https://summerofcode.withgoogle.com/) and has been accepted into
> >> Outreachy May 2023 (https://www.outreachy.org/). You can now
> >> submit internship project ideas for QEMU, KVM, and rust-vmm!
> >>
> >> Please reply to this email by February 6th with your project ideas.
> >>
> >> If you have experience contributing to QEMU, KVM, or rust-vmm you can
> >> be a mentor. Mentors support interns as they work on their project.
> It's a
> >> great way to give back and you get to work with people who are just
> >> starting out in open source.
> >>
> >> Good project ideas are suitable for remote work by a competent
> >> programmer who is not yet familiar with the codebase. In
> >> addition, they are:
> >> - Well-defined - the scope is clear
> >> - Self-contained - there are few dependencies
> >> - Uncontroversial - they are acceptable to the community
> >> - Incremental - they produce deliverables along the way
> >>
> >> Feel free to post ideas even if you are unable to mentor the project.
> >> It doesn't hurt to share the idea!
> >
> >
> > I've been a GSoC mentor for the FreeBSD project on and off for maybe
> > 10-15 years now. I thought I'd share this for feedback here.
> >
> > My project idea falls between the two projects. I've been trying
> > to get bsd-user reviewed and upstreamed for some time now and my
> > time available to do the upstreaming has been greatly diminished lately.
> > It got me thinking: upstreaming is more than just getting patches
> > reviewed often times. While there is a rather mechanical aspect to it
> > (and I could likely automate that aspect more), the real value of going
> > through the review process is that it points out things that had been
> > done wrong, things that need to be redone or refactored, etc. It's often
> > these suggestions that lead to the biggest investment of time on my
> > part: Is this idea good? If I do it, does it break things? Is the
> > feedback right about what's wrong, but wrong about how to fix it? etc.
> > Plus the inevitable: I thought this was a good idea, implemented it only
> > to find it broke other things, and how do I explain that and provide
> > feedback to the reviewer about that breakage to see if it is worth
> > pursuing further or not?
> >
> > So my idea for a project is threefold: First, to create scripts to
> > automate the upstreaming process: to break big files into bite-sized
> > chunks for review on this list. git publish does a great job from there.
> > The current backlog to upstream is approximately "175 files changed,
> > 30270 insertions(+), 640 deletions(-)", which is 300-600 patches at the
> > 50-100 line patch guidance I've been given. So even at .1hr (6 minutes)
> > per patch (which is about 3x faster than I can do it by hand), that's
> > ~60 hours just to create the patches. Writing automation should take
> > much less time. Realistically, this is on the order of 10-20 hours to
> > get done.
> >
> > Second, to take feedback from the reviews for refactoring the bsd-user
> > code base (which will eventually land upstream). I often spend a few
> > hours creating my patches each quarter, then about 10 or so hours for
> > the 30ish patches that I do processing the review feedback by
> > refactoring other things (typically other architectures), checking
> > details of other architectures (usually by looking at the FreeBSD
> > kernel), looking for ways to refactor to share code with linux-user
> > (though so far only the safe signals code is upstream; elf could be
> > too), chatting online about the feedback to better understand it, and
> > seeing what I can mine from linux-user (since the code is derived from
> > that, but didn't pick up all the changes linux-user has), etc. This
> > would be on the order of 100 hours.
> >
> > Third, the testing infrastructure that exists for linux-user is not
> > well leveraged to test bsd-user. I've done some tests from time to time
> > with it, but it's not in a state that it can be used as, say, part of a
> > CI pipeline. In addition, the FreeBSD project has some very large jobs,
> > a subset of which could be used to further ensure that critical bits of
> > infrastructure don't break (or are working, if not in a CI pipeline).
> > Things like building and using go, rust, and the like are constantly
> > breaking for reasons too long to enumerate here. This job could be as
> > little as 50 hours to do a minimal but CI-ready job, or as much as 200
> > hours to do a more complete job that could be used to bisect breakage
> > more quickly and give good assurance that at any given time bsd-user is
> > useful and working.
> >
> > That's in addition to growing the number of people that can work on
> > this code and on the *-user code in general, since they are quite
> > similar.
> >
> > Some of these tasks are squarely in the QEMU realm, while others are in
> > the FreeBSD realm, but that's similar to linux-user, which requires very
> > heavy interfacing with the Linux realm. It's just that a lot of that
> > work is already complete, so the needs are substantially less there on
> > an ongoing basis. Since it does straddle the two projects, I'm unsure
> > where to propose this project be housed. But since this is a call for
> > ideas, I thought I'd float it to see what the feedback is. I'm happy to
> > write this up in a more formal sense if it would be seriously
> > considered, but want to get feedback as to what areas I might want to
> > emphasize in such a proposal.
> >
> > Comments?
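The patch-splitting automation described in the first point above could start from per-file diff sizes (e.g. `git diff --numstat` output) and a greedy chunker. This is a hypothetical sketch — the function name, input format, and 100-line budget are assumptions, not an existing tool:

```python
# Greedily group changed files into review-sized patches, staying
# near the 50-100 line guidance mentioned above. `hunks` is a list
# of (path, changed_lines) pairs, as could be parsed from
# `git diff --numstat`.

def chunk_hunks(hunks, max_lines=100):
    chunks, current, size = [], [], 0
    for path, lines in hunks:
        # Flush the current group before it would exceed the budget;
        # an oversized file still becomes its own chunk (splitting
        # within a file would need hunk-level granularity).
        if current and size + lines > max_lines:
            chunks.append(current)
            current, size = [], 0
        current.append(path)
        size += lines
    if current:
        chunks.append(current)
    return chunks

backlog = [("bsd-user/freebsd/os-syscall.c", 30),
           ("bsd-user/arm/target_arch_cpu.h", 40),
           ("bsd-user/main.c", 150)]
# The two small files share one patch; the large file gets its own.
print(chunk_hunks(backlog))
```

Each resulting group could then be handed to `git publish` as one patch in a series.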
>
> Hi Warner,
> Don't worry about it spanning FreeBSD and QEMU, you're welcome to list
> the project idea through QEMU. You can have co-mentors that are not
> part of the QEMU community in order to bring in additional FreeBSD
> expertise.
>
> My main thought is that getting all code upstream sounds like a
> sprawling project that likely won't be finished within one internship.
> Can you pick just a subset of what you described? It should be a
> well-defined project that depends minimally on other people finishing
> stuff or reaching agreement on something controversial. That way the
> intern will be able to come up with specific tasks for their project
> plan and there is little risk that they can't complete them due to
> outside factors.
>

I like this notion of limiting the scope. There are three or maybe four
main areas that I can call out. I got to thinking about all the details I
have to handle in how I've been upstreaming things, and realized that
there's a lot due to the complicated history here...

> One way to go about this might be for you to define a milestone that
> involves completing, testing, and upstreaming just a subset of the
> out-of-tree code. For example, it might implement a limited set of
> core syscall families. The intern will then focus on delivering that
> instead of worrying about the daunting task of getting everything
> merged. Finishing this subset would advance bsd-user FreeBSD support
> by a useful degree (e.g. ability to run certain applications).
>
> Does that sound good?
>

Yes. I like this, but it's hard to know what that might be because many
things are hidden behind the scenes... But I'll try running a quick build
to see if I can gather enough stats to come up with a good set of tests.
Maybe I'll start with building 'hello world' with clang on armv7 running
on an amd64 host to see what's missing today. I also have an aarch64 set
of patches I might try hard to get in ASAP, so that might be the target
instead (since it might be a bit more useful).

Warner


> Stefan
>



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-08 23:01     ` Warner Losh
@ 2023-02-09  1:25       ` Stefan Hajnoczi
  0 siblings, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-09  1:25 UTC (permalink / raw)
  To: Warner Losh; +Cc: qemu-devel

On Wed, 8 Feb 2023 at 18:02, Warner Losh <imp@bsdimp.com> wrote:
> On Fri, Jan 27, 2023 at 3:02 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> On Fri, 27 Jan 2023 at 12:10, Warner Losh <imp@bsdimp.com> wrote:
>> >
>> > [[ cc list trimmed to just qemu-devel ]]
>> >
>> > On Fri, Jan 27, 2023 at 8:18 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> >>
>> >> Dear QEMU, KVM, and rust-vmm communities,
>> >> QEMU will apply for Google Summer of Code 2023
>> >> (https://summerofcode.withgoogle.com/) and has been accepted into
>> >> Outreachy May 2023 (https://www.outreachy.org/). You can now
>> >> submit internship project ideas for QEMU, KVM, and rust-vmm!
>> >>
>> >> Please reply to this email by February 6th with your project ideas.
>> >>
>> >> If you have experience contributing to QEMU, KVM, or rust-vmm you can
>> >> be a mentor. Mentors support interns as they work on their project. It's a
>> >> great way to give back and you get to work with people who are just
>> >> starting out in open source.
>> >>
>> >> Good project ideas are suitable for remote work by a competent
>> >> programmer who is not yet familiar with the codebase. In
>> >> addition, they are:
>> >> - Well-defined - the scope is clear
>> >> - Self-contained - there are few dependencies
>> >> - Uncontroversial - they are acceptable to the community
>> >> - Incremental - they produce deliverables along the way
>> >>
>> >> Feel free to post ideas even if you are unable to mentor the project.
>> >> It doesn't hurt to share the idea!
>> >
>> >
>> > I've been a GSoC mentor for the FreeBSD project on and off for maybe
>> > 10-15 years now. I thought I'd share this for feedback here.
>> >
>> > My project idea falls between the two projects. I've been trying
>> > to get bsd-user reviewed and upstreamed for some time now and my
>> > time available to do the upstreaming has been greatly diminished lately.
>> > It got me thinking: upstreaming is more than just getting patches reviewed
>> > often times. While there is a rather mechanical aspect to it (and I could likely
>> > automate that aspect more), the real value of going through the review process
>> > is that it points out things that had been done wrong, things that need to be
>> > redone or refactored, etc. It's often these suggestions that lead to the biggest
>> > investment of time on my part: Is this idea good? If I do it, does it break things?
>> > Is the feedback right about what's wrong, but wrong about how to fix it? etc.
>> > Plus the inevitable, I thought this was a good idea, implemented it only to find
>> > it broke other things, and how do I explain that and provide feedback to the
>> > reviewer about that breakage to see if it is worth pursuing further or not?
>> >
>> > So my idea for a project is threefold: First, to create scripts to automate the
>> > upstreaming process: to break big files into bite-sized chunks for review on
>> > this list. git publish does a great job from there. The current backlog to upstream
>> > is approximately " 175 files changed, 30270 insertions(+), 640 deletions(-)" which
>> > is 300-600 patches at the 50-100 line patch guidance I've been given. So even
>> > at .1hr (6 minutes) per patch (which is about 3x faster than I can do it by hand),
>> > that's ~60 hours just to create the patches. Writing automation should take
>> > much less time. Realistically, this is on the order of 10-20 hours to get done.
>> >
>> > Second, it's to take feedback from the reviews for refactoring
>> > the bsd-user code base (which will eventually land in upstream). I often spend
>> > a few hours creating my patches each quarter, then about 10 or so hours for the
>> > 30ish patches that I do processing the review feedback by refactoring other things
>> > (typically other architectures), checking details of other architectures (usually by
>> > looking at the FreeBSD kernel), or looking for ways to refactor to share code with
>> > linux-user  (though so far only the safe signals is upstream: elf could be too), or
>> > chatting online about the feedback to better understand it, to see what I can mine
>> > from linux-user (since the code is derived from that, but didn't pick up all the changes
>> > linux-user has), etc. This would be on the order of 100 hours.
>> >
>> > Third, the testing infrastructure that exists for linux-user is not well leveraged to test
>> > bsd-user. I've done some tests from time to time with it, but it's not in a state that it
>> > can be used as, say, part of a CI pipeline. In addition, the FreeBSD project has some
>> > very large jobs, a subset of which could be used to further ensure that critical bits of
>> > infrastructure don't break (or are working if not in a CI pipeline). Things like building
>> > and using go, rust and the like are constantly breaking for reasons too long to enumerate
>> > here. This job could be as little as 50 hours to do a minimal but complete enough for CI job,
>> > or as much as 200 hours to do a more complete job that could be used to bisect breakage
>> > more quickly and give good assurance that at any given time bsd-user is useful and working.
>> >
>> > That's in addition to growing the number of people that can work on this code and
>> > on the *-user code in general since they are quite similar.
>> >
>> > Some of these tasks are squarely in the qemu-realm, while others are in the FreeBSD realm,
>> > but that's similar to linux-user which requires very heavy interfacing with the linux realm. It's
>> > just that a lot of that work is already complete so the needs are substantially less there on an
>> > ongoing basis. Since it does straddle the two projects, I'm unsure where to propose this project
>> > be housed. But since this is a call for ideas, I thought I'd float it to see what the feedback is. I'm
>> > happy to write this up in a more formal sense if it would be seriously considered, but want to get
>> > feedback as to what areas I might want to emphasize in such a proposal.
>> >
>> > Comments?
>>
>> Hi Warner,
>> Don't worry about it spanning FreeBSD and QEMU, you're welcome to list
>> the project idea through QEMU. You can have co-mentors that are not
>> part of the QEMU community in order to bring in additional FreeBSD
>> expertise.
>>
>> My main thought is that getting all code upstream sounds like a
>> sprawling project that likely won't be finished within one internship.
>> Can you pick just a subset of what you described? It should be a
>> well-defined project that depends minimally on other people finishing
>> stuff or reaching agreement on something controversial. That way the
>> intern will be able to come up with specific tasks for their project
>> plan and there is little risk that they can't complete them due to
>> outside factors.
>
>
> I like this notion of limiting the scope. There are three or maybe four main areas
> that I can call out. I got to thinking about all the details I have to handle in how
> I've been upstreaming things, and realized that there's a lot due to the complicated
> history here...
>
>> One way to go about this might be for you to define a milestone that
>> involves completing, testing, and upstreaming just a subset of the
>> out-of-tree code. For example, it might implement a limited set of
>> core syscall families. The intern will then focus on delivering that
>> instead of worrying about the daunting task of getting everything
>> merged. Finishing this subset would advance bsd-user FreeBSD support
>> by a useful degree (e.g. ability to run certain applications).
>>
>> Does that sound good?
>
>
> Yes. I like this, but it's hard to know what that might be because many things are
> hidden behind the scenes... But I'll try running a quick build to see if I can gather
> enough stats to come up with a good set of tests... But maybe I'll start with building
> 'hello world' with clang on armv7 running on an amd64 host to see what's missing
> today. I also have an aarch64 set of patches I might try hard to get in ASAP so that
> might be the target instead (since it might be a bit more useful).

Hi Warner,
Great to hear back from you. Don't worry if you don't have the details
right now. I have created a placeholder on the ideas list that you can
fill in over the coming days:
https://wiki.qemu.org/Internships/ProjectIdeas/FreeBSDUser

You can either reply via email and I'll post your project description
on the wiki, or feel free to edit the above wiki page directly.

Thanks,
Stefan



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2023-02-06 19:50 ` Alberto Faria
@ 2023-02-17 16:23 ` Stefano Garzarella
  2023-02-17 16:53   ` Stefan Hajnoczi
  2023-02-17 16:42 ` German Maglione
  2023-02-17 16:59 ` Stefan Hajnoczi
  5 siblings, 1 reply; 24+ messages in thread
From: Stefano Garzarella @ 2023-02-17 16:23 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Eugenio Pérez, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

Hi Stefan,

On Fri, Jan 27, 2023 at 10:17:40AM -0500, Stefan Hajnoczi wrote:
>Dear QEMU, KVM, and rust-vmm communities,
>QEMU will apply for Google Summer of Code 2023
>(https://summerofcode.withgoogle.com/) and has been accepted into
>Outreachy May 2023 (https://www.outreachy.org/). You can now
>submit internship project ideas for QEMU, KVM, and rust-vmm!
>
>Please reply to this email by February 6th with your project ideas.

Sorry for being late. If there is still time, I would like to propose the
following project.

Please, let me know if I should add it to the wiki page.

Any feedback or co-mentors are welcome :-)

Thanks,
Stefano



=== Sibling VM communication in vhost-user-vsock ===

'''Summary:''' Extend the existing vhost-user-vsock Rust application to
support sibling VM communication

During GSoC 2021, we developed the vhost-user-vsock application in Rust.
It leverages the vhost-user protocol to emulate a virtio-vsock device in
an external process and provides the hybrid VSOCK interface over AF_UNIX
introduced by Firecracker.

The current implementation supports a single virtual machine (VM) per
process instance.
The idea of this project is to extend the vhost-user-vsock crate
available in the rust-vmm/vhost-device workspace to support multiple VMs
per instance and allow communication between sibling VMs.

This project will allow you to learn more about the virtio-vsock
specification, rust-vmm crates, and the vhost-user protocol used to
interface with QEMU.

This work will be done in Rust, but we may need to patch the virtio-vsock
driver or vsock core in Linux if we find issues. AF_VSOCK in Linux
already supports the VMADDR_FLAG_TO_HOST flag, set in struct sockaddr_vm,
to communicate with sibling VMs.

Goals:
* Understand how a virtio-vsock device works
* Refactor vhost-user-vsock code to allow multiple virtio-vsock device instances
* Extend the vhost-user-vsock CLI
* Implement sibling VM communication
* (optional) Support adding new VMs at runtime
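To make the addressing concrete: sibling communication hinges on the guest setting VMADDR_FLAG_TO_HOST in struct sockaddr_vm, so the connection is routed to the host, where vhost-user-vsock can forward it to the sibling VM. A minimal Python sketch of the address layout (field order and constants mirror <linux/vm_sockets.h>; the CID/port values are purely illustrative):

```python
import struct

# Constants from <linux/vm_sockets.h>.
AF_VSOCK = 40
VMADDR_FLAG_TO_HOST = 0x01  # route the connection via the host

def pack_sockaddr_vm(cid, port, flags=0):
    """Pack a struct sockaddr_vm: u16 family, u16 reserved,
    u32 port, u32 cid, u8 flags, zero padding to 16 bytes."""
    return struct.pack("=HHIIB3x", AF_VSOCK, 0, port, cid, flags)

# A guest connecting to sibling CID 42, port 1234: without the flag
# the driver has no route to another guest; with VMADDR_FLAG_TO_HOST
# the connection goes to the host, where the vhost-user-vsock device
# can route it to the sibling's instance.
addr = pack_sockaddr_vm(cid=42, port=1234, flags=VMADDR_FLAG_TO_HOST)
assert len(addr) == 16  # sizeof(struct sockaddr)
```

The project itself would of course go through the Rust vsock/vhost crates; the sketch only shows where the TO_HOST flag lives in the address.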

'''Links:'''
* [https://gitlab.com/vsock/vsock vsock info and issues]
* [https://wiki.qemu.org/Features/VirtioVsock virtio-vsock QEMU wiki page]
* [https://github.com/rust-vmm/vhost-device/tree/main/crates/vsock vhost-user-vsock application]
* [https://summerofcode.withgoogle.com/archive/2021/projects/6126117680840704 vhost-user-vsock project @ GSoC 2021]
* [https://github.com/firecracker-microvm/firecracker/blob/master/docs/vsock.md Firecracker's hybrid VSOCK]
* [https://gitlab.com/qemu-project/qemu/-/blob/master/docs/interop/vhost-user.rst vhost-user protocol]
* [https://lore.kernel.org/lkml/20201214161122.37717-1-andraprs@amazon.com/ VMADDR_FLAG_TO_HOST flag support in Linux]

'''Details:'''
* Project size: 350 hours
* Skill level: intermediate (knowledge of Rust and virtualization)
* Language: Rust
* Mentor: Stefano Garzarella <sgarzare@redhat.com>
** IRC: sgarzare / Matrix: @sgarzare:matrix.org
* Suggested by: Stefano Garzarella <sgarzare@redhat.com>



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2023-02-17 16:23 ` Stefano Garzarella
@ 2023-02-17 16:42 ` German Maglione
  2023-02-17 16:56   ` Stefan Hajnoczi
  2023-02-17 16:59 ` Stefan Hajnoczi
  5 siblings, 1 reply; 24+ messages in thread
From: German Maglione @ 2023-02-17 16:42 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Alberto Faria, Daniel Henrique Barboza, Cédric Le Goater,
	Bernhard Beschow, Sean Christopherson, Vitaly Kuznetsov

[-- Attachment #1: Type: text/plain, Size: 3180 bytes --]

Hi Stefan,

Sorry for being so late. If it is still possible, I would like to propose
the following project:

=== A sandboxing tool for virtiofsd ===

'''Summary:''' Create a tool that runs virtiofsd in a sandboxed environment

Virtiofs is a shared file system that lets virtual machines access a
directory tree on the host. Unlike existing approaches, it is designed to
offer local file system semantics and performance.

Currently, virtiofsd integrates the sandboxing code and the server code in
a single binary. The goal is to extract that code and create an external
tool that creates a sandbox environment and runs virtiofsd in it. In
addition, that tool should be extended to be able to run virtiofsd in a
restricted environment with Landlock.

This will allow greater flexibility when integrating virtiofsd into a VMM
or running it inside a container.

Goals:
* Understand how to set up a restricted environment using chroot,
  namespaces, and Landlock
* Refactor virtiofsd to extract the sandbox code into its own crate
* Create an external sandboxing tool for virtiofsd
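As a rough sketch of what such an external launcher could do — entirely hypothetical: the step list, ordering, and option names are illustrative, not the tool's actual design:

```python
# Hypothetical plan for an external sandboxing launcher: today
# virtiofsd performs these steps internally; the project would move
# them into a separate tool that then exec's virtiofsd.

def sandbox_plan(shared_dir, use_landlock=False):
    """Return the ordered isolation steps for launching virtiofsd
    restricted to shared_dir."""
    steps = [
        "unshare mount/PID/network namespaces "
        "(CLONE_NEWNS|CLONE_NEWPID|CLONE_NEWNET)",
        f"bind-mount {shared_dir} and pivot_root into it",
        "drop capabilities not needed for serving files",
    ]
    if use_landlock:
        # Landlock (Linux >= 5.13) restricts filesystem access without
        # privileges, which helps when running inside a container.
        steps.append(f"landlock: allow filesystem access beneath "
                     f"{shared_dir} only")
    steps.append(f"exec virtiofsd --shared-dir {shared_dir}")
    return steps

for step in sandbox_plan("/srv/vm-share", use_landlock=True):
    print("-", step)
```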

'''Links:'''
* https://virtio-fs.gitlab.io/
* https://gitlab.com/virtio-fs/virtiofsd
* https://landlock.io/

'''Details:'''
* Project size: 175 hours
* Skill level: intermediate (knowledge of Rust and C)
* Language: Rust
* Mentor: German Maglione <gmaglione@redhat.com>, Stefano Garzarella <
sgarzare@redhat.com>
* Suggested by: German Maglione <gmaglione@redhat.com>


On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:

> Dear QEMU, KVM, and rust-vmm communities,
> QEMU will apply for Google Summer of Code 2023
> (https://summerofcode.withgoogle.com/) and has been accepted into
> Outreachy May 2023 (https://www.outreachy.org/). You can now
> submit internship project ideas for QEMU, KVM, and rust-vmm!
>
> Please reply to this email by February 6th with your project ideas.
>
> If you have experience contributing to QEMU, KVM, or rust-vmm you can
> be a mentor. Mentors support interns as they work on their project. It's a
> great way to give back and you get to work with people who are just
> starting out in open source.
>
> Good project ideas are suitable for remote work by a competent
> programmer who is not yet familiar with the codebase. In
> addition, they are:
> - Well-defined - the scope is clear
> - Self-contained - there are few dependencies
> - Uncontroversial - they are acceptable to the community
> - Incremental - they produce deliverables along the way
>
> Feel free to post ideas even if you are unable to mentor the project.
> It doesn't hurt to share the idea!
>
> I will review project ideas and keep you up-to-date on QEMU's
> acceptance into GSoC.
>
> Internship program details:
> - Paid, remote work open source internships
> - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> hrs/week for 12 weeks
> - Mentored by volunteers from QEMU, KVM, and rust-vmm
> - Mentors typically spend at least 5 hours per week during the coding
> period
>
> For more background on QEMU internships, check out this video:
> https://www.youtube.com/watch?v=xNVCX7YMUL8
>
> Please let me know if you have any questions!
>
> Stefan
>
>

-- 
German



* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-17 16:23 ` Stefano Garzarella
@ 2023-02-17 16:53   ` Stefan Hajnoczi
  2023-02-17 16:56     ` Stefano Garzarella
  0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-17 16:53 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Eugenio Pérez, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Fri, 17 Feb 2023 at 11:23, Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> Hi Stefan,
>
> On Fri, Jan 27, 2023 at 10:17:40AM -0500, Stefan Hajnoczi wrote:
> >Dear QEMU, KVM, and rust-vmm communities,
> >QEMU will apply for Google Summer of Code 2023
> >(https://summerofcode.withgoogle.com/) and has been accepted into
> >Outreachy May 2023 (https://www.outreachy.org/). You can now
> >submit internship project ideas for QEMU, KVM, and rust-vmm!
> >
> >Please reply to this email by February 6th with your project ideas.
>
> Sorry for being late. If there is still time, I would like to propose
> the following project.
>
> Please, let me know if I should add it to the wiki page.

Hi Stefano,
I have added it to the wiki page:
https://wiki.qemu.org/Internships/ProjectIdeas/VsockSiblingCommunication

I noticed that the project idea describes sibling VM communication in
words but never gives concrete details about what it is and how it should
work. For someone who has never heard of AF_VSOCK or doesn't know how
addressing works, I think it would help to have more detail: does the
vhost-user-vsock program need new command-line arguments that define
sibling VMs? Does a { .svm_cid = 2, .svm_port = 1234 } address usually
talk to a guest, but the TO_HOST flag changes the meaning, and you wish
to exploit that? Etc. I'm not suggesting making the description much
longer, but instead tweaking it with more concrete details/keywords so
someone can research the idea and understand what the tasks will be.

Stefan


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-17 16:42 ` German Maglione
@ 2023-02-17 16:56   ` Stefan Hajnoczi
  0 siblings, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-17 16:56 UTC (permalink / raw)
  To: German Maglione
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Alberto Faria, Daniel Henrique Barboza, Cédric Le Goater,
	Bernhard Beschow, Sean Christopherson, Vitaly Kuznetsov

On Fri, 17 Feb 2023 at 11:43, German Maglione <gmaglione@redhat.com> wrote:
>
> Hi Stefan,
>
> Sorry for being so late, if it is still possible I would like to propose the
> following project:

Added, thanks!
https://wiki.qemu.org/Internships/ProjectIdeas/VirtiofsdSandboxingTool

Stefan

>
> === A sandboxing tool for virtiofsd ===
>
> '''Summary:''' Create a tool that runs virtiofsd in a sandboxed environment
>
> Virtiofs is a shared file system that lets virtual machines access a directory
> tree on the host. Unlike existing approaches, it is designed to
> offer local file system semantics and performance.
>
> Currently, virtiofsd integrates the sandboxing code and the server code in a
> single binary. The goal is to extract that code and create an external tool that
> creates a sandbox environment and runs virtiofsd in it. In addition, that tool
> should be extended to be able to run virtiofsd in a restricted environment with
> Landlock.
>
> This will allow greater flexibility when integrating virtiofsd into a VMM or
> running it inside a container.
>
> Goals:
> * Understand how to set up a restricted environment using chroot, namespaces, and
>   Landlock
> * Refactor virtiofsd to extract the sandbox code to its own crate
> * Create an external sandboxing tool for virtiofsd
>
> '''Links:'''
> * https://virtio-fs.gitlab.io/
> * https://gitlab.com/virtio-fs/virtiofsd
> * https://landlock.io/
>
> '''Details:'''
> * Project size: 175 hours
> * Skill level: intermediate (knowledge of Rust and C)
> * Language: Rust
> * Mentor: German Maglione <gmaglione@redhat.com>, Stefano Garzarella <sgarzare@redhat.com>
> * Suggested by: German Maglione <gmaglione@redhat.com>
>
>
> On Fri, Jan 27, 2023 at 4:18 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> Dear QEMU, KVM, and rust-vmm communities,
>> QEMU will apply for Google Summer of Code 2023
>> (https://summerofcode.withgoogle.com/) and has been accepted into
>> Outreachy May 2023 (https://www.outreachy.org/). You can now
>> submit internship project ideas for QEMU, KVM, and rust-vmm!
>>
>> Please reply to this email by February 6th with your project ideas.
>>
>> If you have experience contributing to QEMU, KVM, or rust-vmm you can
>> be a mentor. Mentors support interns as they work on their project. It's a
>> great way to give back and you get to work with people who are just
>> starting out in open source.
>>
>> Good project ideas are suitable for remote work by a competent
>> programmer who is not yet familiar with the codebase. In
>> addition, they are:
>> - Well-defined - the scope is clear
>> - Self-contained - there are few dependencies
>> - Uncontroversial - they are acceptable to the community
>> - Incremental - they produce deliverables along the way
>>
>> Feel free to post ideas even if you are unable to mentor the project.
>> It doesn't hurt to share the idea!
>>
>> I will review project ideas and keep you up-to-date on QEMU's
>> acceptance into GSoC.
>>
>> Internship program details:
>> - Paid, remote work open source internships
>> - GSoC projects are 175 or 350 hours, Outreachy projects are 30
>> hrs/week for 12 weeks
>> - Mentored by volunteers from QEMU, KVM, and rust-vmm
>> - Mentors typically spend at least 5 hours per week during the coding period
>>
>> For more background on QEMU internships, check out this video:
>> https://www.youtube.com/watch?v=xNVCX7YMUL8
>>
>> Please let me know if you have any questions!
>>
>> Stefan
>>
>
>
> --
> German


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-02-17 16:53   ` Stefan Hajnoczi
@ 2023-02-17 16:56     ` Stefano Garzarella
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Garzarella @ 2023-02-17 16:56 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, kvm, Rust-VMM Mailing List, Alex Bennée,
	Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Eugenio Pérez, Florescu, Andreea,
	Damien, Dmitry Fomichev, Hanna Reitz, Alberto Faria,
	Daniel Henrique Barboza, Cédric Le Goater, Bernhard Beschow,
	Sean Christopherson, Vitaly Kuznetsov, gmaglione

On Fri, Feb 17, 2023 at 11:53:03AM -0500, Stefan Hajnoczi wrote:
>On Fri, 17 Feb 2023 at 11:23, Stefano Garzarella <sgarzare@redhat.com> wrote:
>>
>> Hi Stefan,
>>
>> On Fri, Jan 27, 2023 at 10:17:40AM -0500, Stefan Hajnoczi wrote:
>> >Dear QEMU, KVM, and rust-vmm communities,
>> >QEMU will apply for Google Summer of Code 2023
>> >(https://summerofcode.withgoogle.com/) and has been accepted into
>> >Outreachy May 2023 (https://www.outreachy.org/). You can now
>> >submit internship project ideas for QEMU, KVM, and rust-vmm!
>> >
>> >Please reply to this email by February 6th with your project ideas.
>>
>> sorry for being late, if there is still time I would like to propose the
>> following project.
>>
>> Please, let me know if I should add it to the wiki page.
>
>Hi Stefano,
>I have added it to the wiki page:
>https://wiki.qemu.org/Internships/ProjectIdeas/VsockSiblingCommunication

Great, thanks!

>
>I noticed that the project idea describes in words but never gives
>concrete details about what sibling VM communication is and how it
>should work. For someone who has never heard of AF_VSOCK or doesn't
>know how addressing works, I think it would help to have more detail: does the
>vhost-user-vsock program need new command-line arguments that define
>sibling VMs, does a { .svm_cid = 2, .svm_port = 1234 } address usually
>talk to a guest but the TO_HOST flag changes the meaning and you wish
>to exploit that, etc? I'm not suggesting making the description much
>longer, but instead tweaking it with more concrete details/keywords so
>someone can research the idea and understand what the tasks will be.

You are right, I will add more details/keywords to make it clearer.

Thanks,
Stefano


* Re: Call for GSoC and Outreachy project ideas for summer 2023
  2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2023-02-17 16:42 ` German Maglione
@ 2023-02-17 16:59 ` Stefan Hajnoczi
  5 siblings, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2023-02-17 16:59 UTC (permalink / raw)
  To: qemu-devel, kvm, Rust-VMM Mailing List
  Cc: Alex Bennée, Philippe Mathieu-Daudé,
	Paolo Bonzini, Gerd Hoffmann, Marc-André Lureau,
	Thomas Huth, John Snow, Stefano Garzarella, Eugenio Pérez,
	Florescu, Andreea, Damien, Dmitry Fomichev, Hanna Reitz,
	Alberto Faria, Daniel Henrique Barboza, Cédric Le Goater,
	Bernhard Beschow, Sean Christopherson, Vitaly Kuznetsov,
	gmaglione

On Fri, 27 Jan 2023 at 10:17, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> Please reply to this email by February 6th with your project ideas.

The call for project ideas is now closed. We have enough project ideas
for this internship cycle and I wouldn't want people to spend time on
additional ideas that we're unlikely to have funding for. Thank you
everyone who submitted ideas!

Stefan

end of thread, other threads:[~2023-02-17 16:59 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-27 15:17 Call for GSoC and Outreachy project ideas for summer 2023 Stefan Hajnoczi
2023-01-27 17:10 ` Warner Losh
2023-01-27 22:01   ` Stefan Hajnoczi
2023-02-08 23:01     ` Warner Losh
2023-02-09  1:25       ` Stefan Hajnoczi
2023-02-05  8:14 ` Eugenio Perez Martin
2023-02-05 13:57   ` Stefan Hajnoczi
2023-02-06 11:52     ` Eugenio Perez Martin
2023-02-06 14:21       ` Stefan Hajnoczi
2023-02-06 16:46         ` Eugenio Perez Martin
2023-02-06 17:21           ` Stefan Hajnoczi
2023-02-06 18:47             ` Eugenio Perez Martin
2023-02-07  1:00               ` Stefan Hajnoczi
2023-02-06 19:50 ` Alberto Faria
2023-02-06 21:21   ` Stefan Hajnoczi
2023-02-07 10:23     ` Alberto Faria
2023-02-07 10:29       ` Alberto Faria
2023-02-07 14:32         ` Stefan Hajnoczi
2023-02-17 16:23 ` Stefano Garzarella
2023-02-17 16:53   ` Stefan Hajnoczi
2023-02-17 16:56     ` Stefano Garzarella
2023-02-17 16:42 ` German Maglione
2023-02-17 16:56   ` Stefan Hajnoczi
2023-02-17 16:59 ` Stefan Hajnoczi
