qemu-devel.nongnu.org archive mirror
From: Jan Kiszka <jan.kiszka@siemens.com>
To: Liang Yan <LYan@suse.com>, qemu-devel <qemu-devel@nongnu.org>
Cc: Jailhouse <jailhouse-dev@googlegroups.com>,
	Claudio Fontana <claudio.fontana@gmail.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Hannes Reinecke <hare@suse.de>,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU
Date: Mon, 2 Dec 2019 07:16:57 +0100	[thread overview]
Message-ID: <0c6969db-848f-f05b-2dc0-589cb422aa56@siemens.com> (raw)
In-Reply-To: <fb213f9e-8bd8-6c33-7a6e-47dda982903d@siemens.com>

On 27.11.19 18:19, Jan Kiszka wrote:
> Hi Liang,
> 
> On 27.11.19 16:28, Liang Yan wrote:
>>
>>
>> On 11/11/19 7:57 AM, Jan Kiszka wrote:
>>> To get the ball rolling after my presentation of the topic at KVM Forum
>>> [1] and many fruitful discussions around it, this is a first concrete
>>> code series. As discussed, I'm starting with the IVSHMEM implementation
>>> of a QEMU device and server. It's RFC because, besides specification and
>>> implementation details, there will still be some decisions needed about
>>> how to integrate the new version best into the existing code bases.
>>>
>>> If you want to play with this, the basic setup of the shared memory
>>> device is described in patch 1 and 3. UIO driver and also the
>>> virtio-ivshmem prototype can be found at
>>>
>>>      http://git.kiszka.org/?p=linux.git;a=shortlog;h=refs/heads/queues/ivshmem2
>>>
>>> Accessing the device via UIO is trivial enough. If you want to use it
>>> for virtio, the following is needed on the virtio console backend
>>> side, in addition to the description in patch 3:
>>>
>>>      modprobe uio_ivshmem
>>>      echo "1af4 1110 1af4 1100 ffc003 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
>>>      linux/tools/virtio/virtio-ivshmem-console /dev/uio0
>>>
>>> And for virtio block:
>>>
>>>      echo "1af4 1110 1af4 1100 ffc002 ffffff" > /sys/bus/pci/drivers/uio_ivshmem/new_id
>>>      linux/tools/virtio/virtio-ivshmem-block /dev/uio0 /path/to/disk.img
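For reference, the six fields written to new_id above follow the kernel's generic dynamic PCI ID format (vendor, device, subvendor, subdevice, class, class_mask, all hex); the class value is what selects the protocol flavor here (ffc003 for the console case, ffc002 for the block case), if I read the spec in patch 2 right. A minimal sketch of composing such a line, using the values from the block command above:

```shell
# Compose a dynamic PCI ID line for /sys/bus/pci/drivers/<driver>/new_id.
# Field order: vendor device subvendor subdevice class class_mask (hex).
vendor=1af4    device=1110      # IDs used by the ivshmem device above
subvendor=1af4 subdevice=1100
class=ffc002   class_mask=ffffff  # protocol-specific class (virtio block)

printf '%s %s %s %s %s %s\n' \
  "$vendor" "$device" "$subvendor" "$subdevice" "$class" "$class_mask"
```

Writing that line to new_id (as root) makes uio_ivshmem claim matching devices that are not yet bound to another driver.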
>>>
>>> After that, you can start the QEMU frontend instance with the
>>> virtio-ivshmem driver installed which can use the new /dev/hvc* or
>>> /dev/vda* as usual.
>>>
>>> Any feedback welcome!
>>
>> Hi, Jan,
>>
>> I have been playing with your code for the last few weeks, mostly
>> studying and testing it, of course. Really nice work. I have a few
>> questions:
>>
>> First, the QEMU part looks good. I tried tests between a couple of VMs,
>> and the device popped up correctly in all of them, but I had some
>> problems when trying to load the drivers. For example, with two VMs,
>> vm1 and vm2, and the ivshmem server started as you suggested, vm1 could
>> load uio_ivshmem and virtio_ivshmem correctly, while vm2 could load
>> uio_ivshmem but "/dev/uio0" did not show up, and virtio_ivshmem could
>> not be loaded at all. These problems persist even if I switch the load
>> order of vm1 and vm2, and sometimes resetting "virtio_ivshmem" crashes
>> both vm1 and vm2. I am not quite sure whether this is a bug or an
>> "ivshmem mode" issue; I went through the ivshmem-server code but did
>> not find related information.
> 
> If we are only talking about one ivshmem link and vm1 is the master, 
> there is no role for virtio_ivshmem on it as a backend. That is purely a 
> frontend driver. Vice versa for vm2: if you want to use its ivshmem 
> instance as a virtio frontend, uio_ivshmem plays no role.
> 
> The "crash" would be interesting to understand: Do you see kernel 
> panics in the guests? Or are they stuck? Or are the QEMU instances 
> stuck? Do you know that you can debug the guest kernels via gdb (and 
> the gdb scripts of the kernel)?
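For reference, the guest-kernel debugging mentioned above relies on QEMU's generic gdbstub, nothing ivshmem-specific. A sketch, assuming the guest kernel's vmlinux with debug info is at hand (guest-specific QEMU options elided):

```shell
# Start the guest with a gdbstub listening on tcp:1234 (-s) and the
# CPUs frozen at startup (-S) so even early boot can be debugged:
qemu-system-x86_64 -s -S [...usual guest options...]

# In another terminal, attach gdb to the guest kernel and resume it:
gdb path/to/guest/vmlinux \
    -ex 'target remote :1234' \
    -ex 'continue'
```

With CONFIG_GDB_SCRIPTS enabled in the guest kernel, sourcing its vmlinux-gdb.py additionally provides the lx-* helper commands (e.g. lx-dmesg) for inspecting kernel state.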
> 
>>
>> I started some code work recently, such as fixing code style issues and
>> some work based on the above testing. However, I know you are also
>> working on RFC v2, and besides, the protocol between server and client
>> and between client and client is not finalized yet either, so things may
>> change. I would much appreciate it if you could squeeze me into your
>> development schedule and share some plans with me, :-) Maybe I could
>> send some pull requests to your github repo?
> 
> I'm currently working on a refresh of the Jailhouse queue and the kernel 
> patches to incorporate just two smaller changes:
> 
>   - use Siemens device ID
>   - drop "features" register from ivshmem device
> 
> I have not touched the QEMU code for that so far, thus there is no 
> conflict yet. I will wait for your patches then.
> 
> If it helps us to work on this together, I can push things to github as 
> well. Will drop you a note when done. We should just present the outcome 
> frequently as new series to the list.

I've updated my queues, mostly small changes, mostly to the kernel bits. 
Besides the already announced places, you can also find them as PR 
targets on

https://github.com/siemens/qemu/commits/wip/ivshmem2
https://github.com/siemens/linux/commits/queues/ivshmem2

To give the whole thing broader coverage, I will now also move forward 
and integrate the current state into Jailhouse - at the risk of having 
to rework the interface there once again. But there are a number of 
users already requiring the extended features (or even using them), plus 
this gives nice test coverage of key components and properties.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



Thread overview: 23+ messages
2019-11-11 12:57 [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU Jan Kiszka
2019-11-11 12:57 ` [RFC][PATCH 1/3] hw/misc: Add implementation of ivshmem revision 2 device Jan Kiszka
2019-11-11 12:57 ` [RFC][PATCH 2/3] docs/specs: Add specification of ivshmem device revision 2 Jan Kiszka
2019-11-11 13:45   ` Michael S. Tsirkin
2019-11-11 13:59     ` Jan Kiszka
2019-11-11 15:08       ` Michael S. Tsirkin
2019-11-11 15:27         ` Daniel P. Berrangé
2019-11-11 15:42           ` Jan Kiszka
2019-11-11 16:14             ` Michael S. Tsirkin
2019-11-11 16:25               ` Jan Kiszka
2019-11-11 16:11           ` Michael S. Tsirkin
2019-11-11 16:38             ` Jan Kiszka
2019-11-12  8:04               ` Michael S. Tsirkin
2019-11-20 18:15                 ` Jan Kiszka
2019-12-05 11:14   ` Markus Armbruster
2019-12-05 21:29     ` Jan Kiszka
2019-12-06 10:08       ` Markus Armbruster
2019-11-11 12:57 ` [RFC][PATCH 3/3] contrib: Add server for ivshmem " Jan Kiszka
2019-11-12  0:56 ` [RFC][PATCH 0/3] IVSHMEM version 2 device for QEMU no-reply
2019-11-27 15:28 ` Liang Yan
2019-11-27 17:19   ` Jan Kiszka
2019-12-02  6:16     ` Jan Kiszka [this message]
     [not found]       ` <877b0cd9-d1c5-00c9-c4b6-567c67740962@suse.com>
2019-12-03  7:14         ` Jan Kiszka

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=0c6969db-848f-f05b-2dc0-589cb422aa56@siemens.com \
    --to=jan.kiszka@siemens.com \
    --cc=LYan@suse.com \
    --cc=armbru@redhat.com \
    --cc=claudio.fontana@gmail.com \
    --cc=hare@suse.de \
    --cc=jailhouse-dev@googlegroups.com \
    --cc=mst@redhat.com \
    --cc=qemu-devel@nongnu.org \
    --cc=stefanha@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
