* Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values
@ 2020-07-01 18:26 Mark Wood-Patrick
  2020-07-02 11:59 ` mwoodpatrick
  2020-07-12 17:54 ` Mark Wood-Patrick
  0 siblings, 2 replies; 3+ messages in thread
From: Mark Wood-Patrick @ 2020-07-01 18:26 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mark Wood-Patrick


Background
I have a test environment which runs QEMU 4.2 with a plugin that runs two copies of a PCIe device simulator on a CentOS 7.5 host with an Ubuntu 18.04 guest. When running with a single QEMU CPU using:

     -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on

Our tests run fine. But when running with multiple CPUs:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

The values returned are correct all the way up the call stack and at KVM_EXIT_MMIO in kvm_cpu_exec (qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value returned to the device driver which initiated the read is 0.

Question
Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone getting incorrect reads from memory-mapped device registers when running in this mode? I would appreciate any pointers on how best to debug the flow from KVM_EXIT_MMIO back to the device driver running on the guest.
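
For context on what I mean by "the device driver which initiated the read": the guest-side access is the usual ioremap/readl pattern. A minimal sketch of that pattern (illustrative only; the names and the 0x10 offset are placeholders, not our actual driver):

    #include <linux/pci.h>
    #include <linux/io.h>

    static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *bar0;
        u32 val;
        int err;

        err = pcim_enable_device(pdev);
        if (err)
            return err;

        /* Map BAR 0 of the simulated device. */
        bar0 = pcim_iomap(pdev, 0, 0);
        if (!bar0)
            return -ENOMEM;

        /* This readl() is the access that ends up as KVM_EXIT_MMIO in QEMU;
         * with -smp 2 it sometimes comes back as 0. */
        val = readl(bar0 + 0x10);
        dev_info(&pdev->dev, "reg 0x10 = 0x%08x\n", val);
        return 0;
    }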




* Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values
  2020-07-01 18:26 Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values Mark Wood-Patrick
@ 2020-07-02 11:59 ` mwoodpatrick
  2020-07-12 17:54 ` Mark Wood-Patrick
  1 sibling, 0 replies; 3+ messages in thread
From: mwoodpatrick @ 2020-07-02 11:59 UTC (permalink / raw)
  To: qemu-devel, kvm

Background
==========

I have a test environment which runs QEMU 4.2 with a plugin that runs two
copies of a PCIe device simulator on a CentOS 7.5 host with an Ubuntu 18.04
guest. When running with a single QEMU CPU using:

     -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on

Our tests run fine.

But when running with multiple CPUs:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

Some MMIO reads of the simulated device's BAR 0 registers by our device
driver running on the guest are returning incorrect values.

Running QEMU under gdb I can see that the read request reaches our simulated
device correctly and that the simulator returns the correct result. I have
tracked the return value all the way back up the call stack: the correct
value arrives at KVM_EXIT_MMIO in kvm_cpu_exec
(qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value returned to the device
driver which initiated the read is 0.
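
For reference, the KVM_EXIT_MMIO handling around that line looks roughly
like this in 4.2 (paraphrased, so check your tree; attrs comes from
kvm_arch_post_run earlier in the run loop):

        case KVM_EXIT_MMIO:
            DPRINTF("handle_mmio\n");
            /* Called outside BQL: the memory core dispatches the access to
             * the device model and, for a read, stores the result in the
             * kvm_run mmio buffer shared with the kernel. */
            address_space_rw(&address_space_memory,
                             run->mmio.phys_addr, attrs,
                             run->mmio.data,
                             run->mmio.len,
                             run->mmio.is_write);
            ret = 0;
            break;

If I'm reading the code right, once the bytes are in run->mmio.data QEMU is
done with the access; the next KVM_RUN lets the kernel complete the guest's
original load instruction, so anything that goes wrong after this point is
on the kvm_run/KVM side rather than in the QEMU memory core.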

Question
========

Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone getting
incorrect reads from memory-mapped device registers when running in this
mode? I would appreciate any pointers on how best to debug the flow from
KVM_EXIT_MMIO back to the device driver running on the guest.
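
One thing I am considering (a hypothetical debug aid, not code that exists
in the QEMU tree): a temporary print in that KVM_EXIT_MMIO case, filtered to
the BAR 0 window, so every read coming back from the memory core is logged
together with the vCPU that issued it. The 0xfe000000 base and 0x1000 size
below are placeholders for our BAR:

        case KVM_EXIT_MMIO:
            /* Called outside BQL */
            address_space_rw(&address_space_memory,
                             run->mmio.phys_addr, attrs,
                             run->mmio.data,
                             run->mmio.len,
                             run->mmio.is_write);
            /* Temporary debug aid: log reads that hit the device BAR. */
            if (!run->mmio.is_write &&
                run->mmio.phys_addr >= 0xfe000000 &&
                run->mmio.phys_addr <  0xfe000000 + 0x1000) {
                fprintf(stderr, "vcpu %d: mmio read 0x%llx len %u data:",
                        cpu->cpu_index,
                        (unsigned long long)run->mmio.phys_addr,
                        run->mmio.len);
                for (unsigned i = 0; i < run->mmio.len; i++) {
                    fprintf(stderr, " %02x", run->mmio.data[i]);
                }
                fprintf(stderr, "\n");
            }
            ret = 0;
            break;

Comparing that log against what the driver sees should at least show
whether the zero appears before or after the value leaves QEMU.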
              




* RE: Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values
  2020-07-01 18:26 Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values Mark Wood-Patrick
  2020-07-02 11:59 ` mwoodpatrick
@ 2020-07-12 17:54 ` Mark Wood-Patrick
  1 sibling, 0 replies; 3+ messages in thread
From: Mark Wood-Patrick @ 2020-07-12 17:54 UTC (permalink / raw)
  To: qemu-devel; +Cc: stefanha, Mark Wood-Patrick




From: Mark Wood-Patrick <mwoodpatrick@nvidia.com>
Sent: Wednesday, July 1, 2020 11:26 AM
To: qemu-devel@nongnu.org
Cc: Mark Wood-Patrick <mwoodpatrick@nvidia.com>
Subject: Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values

Background
I have a test environment which runs QEMU 4.2 with a plugin that runs two copies of a PCIe device simulator on a CentOS 7.5 host with an Ubuntu 18.04 guest. When running with a single QEMU CPU using:

     -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on

Our tests run fine. But when running with multiple CPUs:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

The values returned are correct all the way up the call stack and at KVM_EXIT_MMIO in kvm_cpu_exec (qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value returned to the device driver which initiated the read is 0.

Question
Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone getting incorrect reads from memory-mapped device registers when running in this mode? I would appreciate any pointers on how best to debug the flow from KVM_EXIT_MMIO back to the device driver running on the guest.
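
One guest-side experiment I am thinking of trying (hypothetical, not something we have run yet): mmap the device's BAR 0 through sysfs and read a register from threads pinned to different vCPUs, to see whether the bad reads track a particular CPU. The BDF 0000:00:03.0, the 4 KiB map size, and the 0x10 offset are placeholders:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static volatile uint32_t *bar0;

    static void *reader(void *arg)
    {
        long cpu = (long)arg;
        cpu_set_t set;

        /* Pin this thread to one vCPU so reads can be attributed to it. */
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        for (int i = 0; i < 16; i++)
            printf("cpu %ld: reg[0x10] = 0x%08x\n", cpu, bar0[0x10 / 4]);
        return NULL;
    }

    int main(void)
    {
        /* BAR 0 as exposed by the guest kernel; needs root. */
        int fd = open("/sys/bus/pci/devices/0000:00:03.0/resource0",
                      O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        bar0 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (bar0 == MAP_FAILED) { perror("mmap"); return 1; }

        pthread_t t[2];
        for (long c = 0; c < 2; c++)
            pthread_create(&t[c], NULL, reader, (void *)c);
        for (int c = 0; c < 2; c++)
            pthread_join(t[c], NULL);
        return 0;
    }

(Build with gcc -pthread and run as root on the guest.)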



