Hi Elliott,

I hope you're well. 

I'm Kelly, the community manager at the Xen Project. 

I can see you've recently reached out to our community with some issues you'd like help with.
We love that you're participating in our project; however, our developers aren't able to help unless you provide specific details.

As an open-source project, our developers are committed to helping and contributing as much as possible. You're welcome to continue participating; however, please refrain from requesting help without providing the necessary details, as working out what is going on and what assistance you need without them takes up a lot of the community's time.

I'd recommend providing logs or specific information so the community can help you further.
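For a suspected IOMMU or storage issue like the one below, a small helper along these lines (just a sketch, assuming a standard dom0 with the xl toolstack; adjust paths and commands for your setup) collects the usual starting points into one file:

```shell
#!/bin/sh
# Sketch: gather the details Xen developers usually ask for first.
# Assumes the xl toolstack is available in dom0; run as root.
out=xen-bug-report.txt
{
  echo "== xl info =="    # host details and Xen version
  xl info
  echo "== xl dmesg =="   # hypervisor log, where IOMMU faults appear
  xl dmesg
  echo "== dom0 dmesg ==" # kernel-side device/driver errors
  dmesg
} > "$out" 2>&1
echo "Attach $out to your report."
```

The `xl dmesg` section is where messages like the IO_PAGE_FAULT lines quoted below would show up.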

If you'd like to chat more, let me know.

Many thanks,
Kelly Choi

Community Manager
Xen Project 


On Mon, Mar 18, 2024 at 7:42 PM Elliott Mitchell <ehem+xen@m5p.com> wrote:
I sent a ping on this about 2 weeks ago.  Since the plan is to move x86
IOMMU under general x86, the other x86 maintainers should be aware of
this:

On Mon, Feb 12, 2024 at 03:23:00PM -0800, Elliott Mitchell wrote:
> On Thu, Jan 25, 2024 at 12:24:53PM -0800, Elliott Mitchell wrote:
> > Apparently this was first noticed with 4.14, but more recently I've been
> > able to reproduce the issue:
> >
> > https://bugs.debian.org/988477
> >
> > The original observation features MD-RAID1 using a pair of Samsung
> > SATA-attached flash devices.  The main line shows up in `xl dmesg`:
> >
> > (XEN) AMD-Vi: IO_PAGE_FAULT: DDDD:bb:dd.f d0 addr ffffff???????000 flags 0x8 I
> >
> > Where the device points at the SATA controller.  I've ended up
> > reproducing this with some noticeable differences.
> >
> > A major goal of RAID is to have different devices fail at different
> > times.  Hence my initial run had a Samsung device plus a device from
> > another reputable flash manufacturer.
> >
> > I initially noticed this due to messages in domain 0's dmesg about
> > errors from the SATA device.  It wasn't until rather later that I
> > noticed the IOMMU warnings in Xen's dmesg (perhaps messages logged
> > after domain 0 starts should be duplicated into domain 0's dmesg?).
> >
> > All of the failures consistently pointed at the Samsung device.  Due to
> > the expectation it would fail first (lower quality offering with
> > lesser guarantees), I proceeded to replace it with a NVMe device.
> >
> > With some monitoring I discovered the NVMe device was now triggering
> > IOMMU errors, though not nearly as many as the Samsung SATA device did.
> > As such, AMD-Vi plus MD-RAID1 appears to be exposing some sort of
> > IOMMU issue with Xen.
> >
> >
> > All I can do is offer speculation about the underlying cause.  There
> > does seem to be a pattern of higher-performance flash storage devices
> > being more severely affected.
> >
> > I was speculating about the issue being the MD-RAID1 driver abusing
> > Linux's DMA infrastructure in some fashion.
> >
> > Upon further consideration, I'm wondering if this is perhaps a latency
> > issue.  I imagine there is some sort of flush after the IOMMU tables are
> > modified.  Perhaps the Samsung SATA (and all NVMe) devices were trying to
> > execute commands before reloading the IOMMU tables is complete.
>
> Ping!
>
> The recipe seems to be Linux MD RAID1, plus Samsung SATA or any NVMe.
>
> To make it explicit: when I tried Crucial SATA + Samsung SATA, the
> IOMMU errors matched the Samsung SATA device (and the SATA driver
> complained a number of times).
>
> As stated, I'm speculating that lower-latency devices start executing
> commands before the IOMMU tables have finished reloading.  When this
> was originally implemented, fast flash devices were rare.

Both reproductions of this issue I'm aware of were on systems with AMD
processors.  I'm doubtful that suspicion of flash storage hardware is
unique to owners of AMD systems.  As a result, while this /could/ also
affect Intel systems, the lack of reports /suggests/ otherwise.

I've noticed two things when glancing at the original report.  LVM is
not in use there, so that doesn't seem to affect the problem.  The
Phenom II which the original reporter found not to have the issue might
have lacked proper BIOS support, in which case the IOMMU would not have
been functional.
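For anyone wanting to rule that possibility out on their own hardware, a
quick check from dom0 (a sketch; the exact message wording varies by Xen
version) would be something like:

```shell
# Check whether Xen detected and enabled the AMD IOMMU (run in dom0).
# AMD-Vi lines only appear when the IOMMU is present and enabled in
# firmware; the fallback message below is this sketch's own wording.
xl dmesg 2>/dev/null | grep -i -e 'AMD-Vi' -e 'I/O virtualisation' \
  || echo "No IOMMU messages found (IOMMU disabled, or xl unavailable?)"
```

If nothing matches, no IO_PAGE_FAULT messages would ever appear on that
box regardless of the storage workload.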

This being a latency issue is *speculation*, but it would explain the
pattern of devices being affected.

This is rather serious as it can lead to data loss (phew!  glad I just
barely dodged this outcome).


--
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445