From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
To: Johannes Berg <johannes@sipsolutions.net>,
	linux-um@lists.infradead.org,
	virtualization@lists.linux-foundation.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] custom virt-io support (in user-mode-linux)
Date: Wed, 22 May 2019 15:00:32 +0100
Message-ID: <f21ae7ac-ae56-71e6-cebd-f97c8912f5e1@cambridgegreys.com>
In-Reply-To: <8b30e5cea2692d62fd7f486fc98effdb589a1412.camel@sipsolutions.net>



On 22/05/2019 14:46, Johannes Berg wrote:
> Hi Anton,
> 
>>> I'm thinking about adding virt-io support to UML, but the tricky part is
>>> that while I want to use the virt-io basics (because it's a nice
>>> interface from the 'inside'), I don't actually want the stock drivers
>>> that are part of the kernel now (like virtio-net etc.) but rather
>>> something that integrates with wifi (probably building on hwsim).
> 
>> I have looked at using virtio semantics in UML in the past around the
>> point when I wanted to make the recvmmsg/sendmmsg vector drivers common
>> in UML and QEMU. It is certainly possible,
>>
>> I went for the native approach at the end though.
> 
> Hmm. I'm not sure what you mean by either :-)
> 
> Is there any commonality between the vector drivers? 

I was looking purely from a network driver perspective.

I had two options - either do a direct read/write as the driver does 
today, or implement the ring/kick semantics and read/write via that.

I decided not to bother with the latter and to read/write directly 
from/to skbs.
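
Roughly, that direct path boils down to something like the following - 
a simplified sketch in userspace C, not the actual vector_user.c code 
(the names and the fixed batch size are made up for illustration):

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Transmit a batch of packet buffers with a single syscall.  Assumes
 * 'fd' is an already-configured datagram/raw socket and bufs/lens
 * describe the linear data of up to 'n' outgoing skbs. */
static int send_batch(int fd, void **bufs, size_t *lens, unsigned int n)
{
	struct mmsghdr msgs[64];
	struct iovec iovs[64];
	unsigned int i;

	if (n > 64)
		n = 64;

	for (i = 0; i < n; i++) {
		iovs[i].iov_base = bufs[i];
		iovs[i].iov_len  = lens[i];
		memset(&msgs[i].msg_hdr, 0, sizeof(msgs[i].msg_hdr));
		msgs[i].msg_hdr.msg_iov = &iovs[i];
		msgs[i].msg_hdr.msg_iovlen = 1;
	}

	/* one kernel crossing for the whole batch; no ring, no kick */
	return sendmmsg(fd, msgs, n, 0);
}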

> I can't see how
> that'd work without a bus abstraction (like virtio) in qemu? I mean, the
> kernel driver just calls uml_vector_sendmmsg(), which I'd say belongs
> more to the 'outside world', but that can't really be done in qemu?
> 
> Ok, I guess then I see what you mean by 'native' though.
> 
> Similarly, of course, I can implement arbitrary virt-io devices - just
> the kernel side doesn't call a function like uml_vector_sendmmsg()
> directly, but instead the virt-io model, and the model calls the
> function, which essentially is the same just with a (convenient)
> abstraction layer.
> 
> But this leaves the fundamental fact the model code ("vector_user.c" or
> a similar "virtio_user.c") is still part of the build.
> 
> I guess what I'm thinking is have something like "virtio_user_rpc.c"
> that uses some appropriate RPC to interact with the real model. IOW,
> rather than having all the model-specific logic actually be here (like
> vector_user.c actually knows how to send network packets over a real
> socket fd), try to call out to some RPC that contains the real model.
> 
> Now that I thought about it further, I guess my question boils down to
> "did anyone ever think about doing RPC for Virt-IO instead of putting
> the entire device model into the hypervisor/emulator/...".

For virtio in general, no. For UML specifically - yes. I have thought 
of mapping out all key device calls to RPCs for a few applications. The 
issue is that it is fairly difficult to make all of this work cleanly 
without blocking in strange places.

You may want to look at the UML UBD driver. It is an example of moving 
all processing out to an external thread and talking to it via a 
request/response API. While it still expects shared memory and needs 
access to the UML address space, the model should be more amenable to 
replacing various calls with RPCs, as the rest of the kernel is left 
free to run while you are processing the RPC. It also gives you RPC 
completion interrupts, etc. as a side effect.
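
In very rough terms the pattern is (a hypothetical sketch, not the 
actual ubd code - the request layout and do_io() are made up for 
illustration):

#include <stddef.h>
#include <unistd.h>

struct io_req {
	void *cookie;              /* kernel-side request, passed back untouched */
	int op;                    /* read/write/flush/... */
	unsigned long long offset;
	void *buf;                 /* shared memory the helper can touch directly */
	size_t len;
	int error;                 /* filled in by the helper */
};

extern int do_io(struct io_req *req); /* hypothetical backend - or an RPC */

/* Helper thread: take request pointers off one pipe, do the (possibly
 * blocking) work, post completions on another fd.  UML wires the
 * completion fd up as an interrupt source, so the rest of the kernel
 * keeps running while the work - or an RPC to an external model - is
 * in flight. */
static void *io_thread(void *arg)
{
	int *fds = arg;            /* fds[0]: requests in, fds[1]: completions out */
	struct io_req *req;

	while (read(fds[0], &req, sizeof(req)) == sizeof(req)) {
		req->error = do_io(req);
		if (write(fds[1], &req, sizeof(req)) < 0)
			break;     /* readable completion fd == "interrupt" */
	}
	return NULL;
}

Swap do_io() for a call into your external model and you get more or 
less the UML -> thread -> RPC -> model chain below.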

So you basically have UML -> Thread -> RPCs -> Model?

> 
> johannes
> 

-- 
Anton R. Ivanov
Cambridgegreys Limited. Registered in England. Company Number 10273661
https://www.cambridgegreys.com/


