From: Scott Davis <scott.davis@starlab.io>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"paul@xen.org" <paul@xen.org>
Subject: Re: [RFC PATCH] iommu: make no-quarantine mean no-quarantine
Date: Fri, 30 Apr 2021 19:27:51 +0000
Message-ID: <52860A0A-D1D0-427D-ADE6-0876FC0897D3@starlab.io>
In-Reply-To: <ec0cc346-3d56-afec-7414-bce81e9eea1d@suse.com>

On 4/30/21, 3:15 AM, Jan Beulich wrote:
> So far you didn't tell us what the actual crash was. I guess it's not
> even clear to me whether it's Xen or qemu that did crash for you. But
> I have to also admit that until now it wasn't really clear to me that
> you ran Xen _under_ qemu - instead I was assuming there was an
> interaction problem with a qemu serving a guest.

I explained this in my original post; sorry if that was not clear:

> Background: I am setting up a QEMU-based development and testing environment
> for the Crucible team at Star Lab that includes emulated PCIe devices for
> passthrough and hotplug. I encountered an issue with `xl pci-assignable-add`
> that causes the host QEMU to rapidly allocate memory until getting 
> OOM-killed.

As soon as Xen writes the IQT register, the host QEMU process locks up,
starts allocating several hundred MB/sec, and is soon OOM-killed by the
host kernel.
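
For what it's worth, my working mental model of what a device model does on
an IQT write is the usual drain loop sketched below. This is purely an
illustrative, compilable sketch of the pattern, not QEMU's actual code; all
names, sizes, and structure here are my own invention, and I have not yet
confirmed where QEMU's own loop goes wrong:

/* Hypothetical sketch of draining a VT-d invalidation queue when the
 * guest advances IQT.  Nothing here is taken from QEMU; it only shows
 * where a drain loop could fail to terminate. */
#include <stdint.h>
#include <string.h>

#define QUEUE_ENTRIES 1024
#define DESC_SIZE     16                     /* 128-bit descriptors */

struct inv_queue {
    uint8_t  ring[QUEUE_ENTRIES][DESC_SIZE]; /* stand-in for the guest's queue */
    unsigned head;                           /* IQH: next entry the model consumes */
    unsigned tail;                           /* IQT: last entry the guest posted   */
};

/* Stand-in for per-descriptor handling (context-cache, IOTLB, wait, ...). */
static void process_descriptor(const uint8_t *desc)
{
    (void)desc;
}

static void handle_iqt_write(struct inv_queue *q, unsigned new_tail)
{
    q->tail = new_tail % QUEUE_ENTRIES;

    while (q->head != q->tail) {
        uint8_t desc[DESC_SIZE];

        memcpy(desc, q->ring[q->head], DESC_SIZE);
        process_descriptor(desc);

        /* If the model advances head with the wrong stride (say, because
         * it and the guest disagree about descriptor width), or an error
         * path skips the advance, head never reaches tail; the loop then
         * spins, and any per-iteration allocation grows until the OOM
         * killer hits -- which would match the symptom above. */
        q->head = (q->head + 1) % QUEUE_ENTRIES;
    }
}

int main(void)
{
    struct inv_queue q = { .head = 552 };    /* index seen in the trace below */

    handle_iqt_write(&q, 553);               /* mirrors the first IQT write */
    return 0;
}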

On 4/30/21, 3:15 AM, Jan Beulich wrote:
> Interesting. This then leaves the question whether we submit a bogus
> command, or whether qemu can't deal (correctly) with a valid one here.

I did some extra debugging to inspect the index values being written to
IQT as well as the invalidation descriptors themselves, and everything
appeared fine to me on Xen's end. In fact, the descriptor written by
queue_invalidate_context_sync on the map into dom_io is identical to the
one it writes on the unmap from dom0, which completes without issue.
This points towards a QEMU bug to me:

(gdb) c
Thread 1 hit Breakpoint 4, queue_invalidate_context_sync (...) at qinval.c:101
(gdb) bt
#0  queue_invalidate_context_sync (...) at qinval.c:85
#1  flush_context_qi (...) at qinval.c:341
#2  iommu_flush_context_device (...) at iommu.c:400
#3  domain_context_unmap_one (...) at iommu.c:1606
#4  domain_context_unmap (...) at iommu.c:1671
#5  reassign_device_ownership (...) at iommu.c:2396
#6  intel_iommu_assign_device (...) at iommu.c:2476
#7  assign_device (...) at pci.c:1545
#8  iommu_do_pci_domctl (...) at pci.c:1732
#9  iommu_do_domctl (...) at iommu.c:539
...
(gdb) print index
$2 = 552
(gdb) print qinval_entry->q.cc_inv_dsc
$3 = {
  lo = {
    type = 1,
    granu = 3,
    res_1 = 0,
    did = 0,
    sid = 256,
    fm = 0,
    res_2 = 0
  },
  hi = {
    res = 0
  }
}
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_wait (...) at qinval.c:159
#2  invalidate_sync (...) at qinval.c:207
#3  queue_invalidate_context_sync (...) at qinval.c:106
...
(gdb) print tail
$4 = 553
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#3  queue_invalidate_iotlb_sync (...) at qinval.c:120
#4  flush_iotlb_qi (...) at qinval.c:376
#5  iommu_flush_iotlb_dsi (...) at iommu.c:499
#6  domain_context_unmap_one (...) at iommu.c:1611
#7  domain_context_unmap (...) at iommu.c:1671
...
(gdb) print tail
$5 = 554
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_wait (...) at qinval.c:159
#2  invalidate_sync (...) at qinval.c:207
#3  queue_invalidate_iotlb_sync (...) at qinval.c:143
...
(gdb) print tail
$6 = 555
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_context_sync (...) at qinval.c:86
#2  flush_context_qi (...) at qinval.c:341
#3  iommu_flush_context_device (...) at iommu.c:400
#4  domain_context_mapping_one (...) at iommu.c:1436
#5  domain_context_mapping (...) at iommu.c:1510
#6  reassign_device_ownership (...) at iommu.c:2412
...
(gdb) print tail
$7 = 556
(gdb) c
Thread 1 hit Breakpoint 4, queue_invalidate_context_sync (...) at qinval.c:101
(gdb) print index
$8 = 556
(gdb) print qinval_entry->q.cc_inv_dsc
$9 = {
  lo = {
    type = 1,
    granu = 3,
    res_1 = 0,
    did = 0,
    sid = 256,
    fm = 0,
    res_2 = 0
  },
  hi = {
    res = 0
  }
}
(gdb) c
Continuing.
Remote connection closed

With output from dom0 and Xen like:

[   31.002214] e1000e 0000:01:00.0 eth1: removed PHC
[   31.694270] e1000e: eth1 NIC Link is Down
[   31.717849] pciback 0000:01:00.0: seizing device
[   31.719464] Already setup the GSI :20
(XEN) [   83.572804] [VT-D]d0:PCIe: unmap 0000:01:00.0
(XEN) [  808.092310] [VT-D]d32753:PCIe: map 0000:01:00.0
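
As a sanity check on the descriptor contents, the sid value gdb printed
decodes to the same device that shows up in the log. Here is a small
stand-alone snippet (my own, not Xen or QEMU code) that splits a VT-d
source-id into bus/device/function:

/* Stand-alone helper, not from Xen: decode a VT-d source-id.
 * source-id layout is bus[15:8] | device[7:3] | function[2:0]. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t sid = 256;              /* value gdb printed above */

    printf("%02x:%02x.%x\n",
           (sid >> 8) & 0xff,        /* bus      -> 01 */
           (sid >> 3) & 0x1f,        /* device   -> 00 */
           sid & 0x7);               /* function -> 0  */
    return 0;
}

This prints 01:00.0, so sid = 256 is 0000:01:00.0 as expected, and (going by
my reading of the VT-d spec) type = 1 is a context-cache invalidation and
granu = 3 is device-selective, so the descriptor looks well-formed. That
again makes me suspect the consumer rather than the producer.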

Good day,
Scott

