From: Stefan Hajnoczi <stefanha@redhat.com>
To: Alyssa Ross <hi@alyssa.is>
Cc: Nikos Dragazis <ndragazis@arrikto.com>, qemu-devel@nongnu.org
Subject: Re: Testing the virtio-vhost-user QEMU patch
Date: Mon, 27 Jul 2020 11:00:39 +0100
Message-ID: <20200727100039.GE380177@stefanha-x1.localdomain>
In-Reply-To: <87lfj8efve.fsf@alyssa.is>


On Fri, Jul 24, 2020 at 09:56:53PM +0000, Alyssa Ross wrote:
> Stefan Hajnoczi <stefanha@redhat.com> writes:
> 
> > On Fri, Jul 24, 2020 at 10:58:45AM +0000, Alyssa Ross wrote:
> >> Alyssa Ross <hi@alyssa.is> writes:
> >> 
> >> > Stefan Hajnoczi <stefanha@redhat.com> writes:
> >> >
> >> >> On Tue, Jul 21, 2020 at 07:14:38AM +0000, Alyssa Ross wrote:
> >> >>> Hi -- I hope it's okay me reaching out like this.
> >> >>> 
> >> >>> I've been trying to test out the virtio-vhost-user implementation that's
> >> >>> been posted to this list a couple of times, but have been unable to get
> >> >>> it to boot a kernel following the steps listed either on
> >> >>> <https://wiki.qemu.org/Features/VirtioVhostUser> or
> >> >>> <https://ndragazis.github.io/dpdk-vhost-vvu-demo.html>.
> >> >>> 
> >> >>> Specifically, the kernel appears to be unable to write to the
> >> >>> virtio-vhost-user device's PCI registers.  I've included the full panic
> >> >>> output from the kernel at the end of this message.  The panic is
> >> >>> reproducible with two different kernels I tried (with different configs
> >> >>> and versions).  I tried both versions of the virtio-vhost-user I was
> >> >>> able to find[1][2], and both exhibited the same behaviour.
> >> >>> 
> >> >>> Is this a known issue?  Am I doing something wrong?
> >> >>
> >> >> Hi,
> >> >> Unfortunately I'm not sure what the issue is. This is an early
> >> >> virtio-pci register access before a driver for any specific device type
> >> >> (net, blk, vhost-user, etc) comes into play.
> >> >
> >> > Small update here: I tried on another computer, and it worked.  Made
> >> > sure that it was exactly the same QEMU binary, command line, and VM
> >> > disk/initrd/kernel, so I think I can fairly confidently say the panic
> >> > depends on what hardware QEMU is running on.  I set the -cpu value to the
> >> > same on both as well (SandyBridge).
> >> >
> >> > I also discovered that it works on my primary computer (the one it
> >> > panicked on before) with KVM disabled.
> >> >
> >> > Note that I've only got so far as finding that it boots on the other
> >> > machine -- I haven't verified yet that it actually works.
> >> >
> >> > Bad host CPU:  Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
> >> > Good host CPU: AMD EPYC 7401P 24-Core Processor
> >> >
> >> > May I ask what host CPUs other people have tested this on?  Having more
> >> > data would probably be useful.  Could it be an AMD vs. Intel thing?
> >> 
> >> I think I've figured it out!
> >> 
> >> Sandy Bridge and Ivy Bridge hosts encounter this panic because the
> >> "additional resources" bar size is too big, at 1 << 36.  If I change
> >> this to 1 << 35, no more kernel panic.
> >> 
> >> Skylake and later are fine with 1 << 36.  In between Ivy Bridge and
> >> Skylake were Haswell and Broadwell, but I couldn't find anybody who was
> >> able to help me test on either of those, so I don't know what they do.
> >> 
> >> Perhaps related, the hosts that produce panics all seem to have a
> >> physical address size of 36 bits, while the hosts that work have larger
> >> physical address sizes, as reported by lscpu.
> >
> > I have run it successfully on Broadwell but never tried 64GB or larger
> > shared memory resources.
> 
> To clarify, I haven't been using big shared memory resources either --
> this has all been about getting the backend VM to start at all.  The
> panic happens at boot, and the 1 << 36 BAR allocation comes from here,
> during realization:
> https://github.com/ndragazis/qemu/blob/f9ab08c0c8/hw/virtio/virtio-vhost-user-pci.c#L291
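
For anyone reading along without that tree checked out, the registration at
that point looks roughly like the following. This is a sketch, not a copy of
the linked code: the state field, region name, and BAR index are
illustrative, but the size is the 1 << 36 value under discussion.

    /* Inside the device's realize function; "s" is the device state and
     * "vpci_dev" the VirtIOPCIProxy.  Names here are illustrative. */
    memory_region_init(&s->additional_resources_bar, OBJECT(s),
                       "virtio-vhost-user-additional-resources",
                       1ULL << 36 /* 64 GiB; 1ULL << 35 avoided the panic */);
    pci_register_bar(&vpci_dev->pci_dev, 2,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_PREFETCH |
                     PCI_BASE_ADDRESS_MEM_TYPE_64,
                     &s->additional_resources_bar);

Since a BAR has to be placed in a naturally aligned window of its own size,
a 1 << 36 BAR needs a free 64 GiB aperture in the guest physical address
space.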

Okay, then that worked on Broadwell :)

Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
https://ark.intel.com/content/www/us/en/ark/products/85215/intel-core-i7-5600u-processor-4m-cache-up-to-3-20-ghz.html
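
If it helps collect more data points (Haswell especially), the physical
address width that lscpu reports can be read directly from CPUID leaf
0x80000008 on x86 hosts. A minimal standalone check, purely a convenience:

    /* pa-bits.c: print the host physical address width, i.e. the
     * "Address sizes" line from lscpu (EAX[7:0] of CPUID 0x80000008). */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0x80000008 not available\n");
            return 1;
        }

        unsigned int pa_bits = eax & 0xff;
        printf("physical address bits: %u (%llu GiB addressable)\n",
               pa_bits, 1ULL << (pa_bits - 30));
        return 0;
    }

On the i5-2520M above this should print 36, i.e. a 64 GiB physical address
space, exactly the size of the 1 << 36 BAR; the EPYC host reports a larger
width.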

Stefan

