From: Alyssa Ross <hi@alyssa.is>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Nikos Dragazis <ndragazis@arrikto.com>, qemu-devel@nongnu.org
Subject: Re: Testing the virtio-vhost-user QEMU patch
Date: Fri, 24 Jul 2020 21:56:53 +0000
Message-ID: <87lfj8efve.fsf@alyssa.is>
In-Reply-To: <20200724123254.GB333683@stefanha-x1.localdomain>
Stefan Hajnoczi <stefanha@redhat.com> writes:
> On Fri, Jul 24, 2020 at 10:58:45AM +0000, Alyssa Ross wrote:
>> Alyssa Ross <hi@alyssa.is> writes:
>>
>> > Stefan Hajnoczi <stefanha@redhat.com> writes:
>> >
>> >> On Tue, Jul 21, 2020 at 07:14:38AM +0000, Alyssa Ross wrote:
>> >>> Hi -- I hope it's okay me reaching out like this.
>> >>>
>> >>> I've been trying to test out the virtio-vhost-user implementation that's
>> >>> been posted to this list a couple of times, but have been unable to get
>> >>> it to boot a kernel following the steps listed either on
>> >>> <https://wiki.qemu.org/Features/VirtioVhostUser> or
>> >>> <https://ndragazis.github.io/dpdk-vhost-vvu-demo.html>.
>> >>>
>> >>> Specifically, the kernel appears to be unable to write to the
>> >>> virtio-vhost-user device's PCI registers. I've included the full panic
>> >>> output from the kernel at the end of this message. The panic is
>> >>> reproducible with two different kernels I tried (with different configs
>> >>> and versions). I tried both versions of the virtio-vhost-user patch
>> >>> I was able to find[1][2], and both exhibited the same behaviour.
>> >>>
>> >>> Is this a known issue? Am I doing something wrong?
>> >>
>> >> Hi,
>> >> Unfortunately I'm not sure what the issue is. This is an early
>> >> virtio-pci register access before a driver for any specific device type
>> >> (net, blk, vhost-user, etc) comes into play.
>> >
>> > Small update here: I tried on another computer, and it worked. Made
>> > sure that it was exactly the same QEMU binary, command line, and VM
>> > disk/initrd/kernel, so I think I can fairly confidently say the panic
>> > depends on what hardware QEMU is running on. I set the -cpu value to
>> > the same on both as well (SandyBridge).
>> >
>> > I also discovered that it works on my primary computer (the one it
>> > panicked on before) with KVM disabled.
>> >
>> > Note that I've only gotten as far as finding that it boots on the
>> > other machine -- I haven't verified yet that it actually works.
>> >
>> > Bad host CPU: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
>> > Good host CPU: AMD EPYC 7401P 24-Core Processor
>> >
>> > May I ask what host CPUs other people have tested this on? Having more
>> > data would probably be useful. Could it be an AMD vs. Intel thing?
>>
>> I think I've figured it out!
>>
>> Sandy Bridge and Ivy Bridge hosts encounter this panic because the
>> "additional resources" bar size is too big, at 1 << 36. If I change
>> this to 1 << 35, no more kernel panic.
>>
>> Skylake and later are fine with 1 << 36. In between Ivy Bridge and
>> Skylake were Haswell and Broadwell, but I couldn't find anybody who was
>> able to help me test on either of those, so I don't know what they do.
>>
>> Perhaps related, the hosts that produce panics all seem to have a
>> physical address size of 36 bits, while the hosts that work have larger
>> physical address sizes, as reported by lscpu.
>
> I have run it successfully on Broadwell but never tried 64GB or larger
> shared memory resources.
To clarify, I haven't been using big shared memory resources either --
this has all been about getting the backend VM to start at all. The
panic happens at boot, and the 1 << 36 BAR allocation comes from here,
during realization:
https://github.com/ndragazis/qemu/blob/f9ab08c0c8/hw/virtio/virtio-vhost-user-pci.c#L291
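
For reference, the allocation there looks something like the following
(a from-memory sketch of the linked file, not verbatim -- the region
name, BAR index, and exact flag set are my approximations):

    /* In the device's realize function: create a 64 GiB (1ULL << 36)
     * region and expose it as a 64-bit prefetchable memory BAR.  A
     * host with only 36 physical address bits has nowhere to map a
     * BAR this large alongside guest RAM, hence the boot failure. */
    memory_region_init(&dev->additional_resources_bar, OBJECT(vpci_dev),
                       "virtio-vhost-user", 1ULL << 36);
    pci_register_bar(&vpci_dev->pci_dev, 2,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_PREFETCH |
                     PCI_BASE_ADDRESS_MEM_TYPE_64,
                     &dev->additional_resources_bar);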
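
If it helps anyone reproduce this: the physical address width lscpu
reports comes from CPUID leaf 0x80000008, so it can also be checked
with a small standalone program (my own addition here, not part of
the patch; x86 only):

    /* phys_bits.c -- print the host's address widths.
     * Build with: cc -o phys_bits phys_bits.c */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0x80000008: EAX[7:0] = physical bits, EAX[15:8] = virtual. */
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
            return 1;
        }
        printf("physical address bits: %u\n", eax & 0xff);
        printf("virtual address bits:  %u\n", (eax >> 8) & 0xff);
        return 0;
    }

On the Sandy Bridge host this prints 36 physical bits, i.e. 2^36
addressable bytes -- the same as the BAR size -- which presumably is
also why the panic only happens with KVM enabled.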