From: "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)"  <longpeng2@huawei.com>
To: Lu Baolu <baolu.lu@linux.intel.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Nadav Amit <nadav.amit@gmail.com>
Cc: "dwmw2@infradead.org" <dwmw2@infradead.org>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"will@kernel.org" <will@kernel.org>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	"Gonglei (Arei)" <arei.gonglei@huawei.com>,
	chenjiashang <chenjiashang@huawei.com>,
	"Subo (Subo,
	Cloud Infrastructure Service Product Dept.)"  <subo7@huawei.com>
Subject: RE: A problem of Intel IOMMU hardware ?
Date: Thu, 18 Mar 2021 04:46:40 +0000	[thread overview]
Message-ID: <a0ca6dd974be42878a8f51b0a7bbe00f@huawei.com> (raw)
In-Reply-To: <87a5f90a-d1ea-fe7a-2577-fdfdf25f8fd7@linux.intel.com>

Hi guys,

Let me provide some more information; please see below.

> -----Original Message-----
> From: Lu Baolu [mailto:baolu.lu@linux.intel.com]
> Sent: Thursday, March 18, 2021 10:59 AM
> To: Alex Williamson <alex.williamson@redhat.com>
> Cc: baolu.lu@linux.intel.com; Longpeng (Mike, Cloud Infrastructure Service Product
> Dept.) <longpeng2@huawei.com>; dwmw2@infradead.org; joro@8bytes.org;
> will@kernel.org; iommu@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; Gonglei (Arei) <arei.gonglei@huawei.com>;
> chenjiashang <chenjiashang@huawei.com>
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> Hi Alex,
> 
> On 3/17/21 11:18 PM, Alex Williamson wrote:
> >>>           {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
> >>>                   use GDB to pause at here, and then DMA read
> >>> IOVA=0,
> >> IOVA 0 seems to be a special one. Have you verified with other
> >> addresses than IOVA 0?
> > It is???  That would be a problem.
> >
> 
> No problem from hardware point of view as far as I can see. Just thought about
> software might handle it specially.
> 

We simplified the reproducer; the following map/unmap sequence can also
reproduce the problem.

1. Use 2M hugetlbfs to mmap 4G of memory.

2. Run the loop:

while (1) {
    DMA MAP   (0, 0xa0000)    - - - - - - - - - - - - (a)
    DMA UNMAP (0, 0xa0000)    - - - - - - - - - - - - (b)
        Operation-1: dump the DMAR table
    DMA MAP   (0, 0xc0000000) - - - - - - - - - - - - (c)
        Operation-2:
            use GDB to pause here, then DMA read IOVA=0;
            sometimes the DMA succeeds (as expected),
            but sometimes it fails (a not-present fault is reported).
        Operation-3: dump the DMAR table
        Operation-4 (when the DMA fails): see below
    DMA UNMAP (0, 0xc0000000) - - - - - - - - - - - - (d)
}
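For context, steps (a)-(d) above go through the VFIO type1 IOMMU ioctls. Here is a minimal sketch of one iteration's map/unmap calls (it assumes a VFIO container fd is already set up and that some device issues the DMA reads; `make_map`, `do_map` and `do_unmap` are illustrative helper names, not anything from our reproducer):

```c
#include <assert.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Build the argument for VFIO_IOMMU_MAP_DMA: map [iova, iova+size)
 * read/write to the process virtual address vaddr. */
static struct vfio_iommu_type1_dma_map make_map(unsigned long long vaddr,
                                                unsigned long long iova,
                                                unsigned long long size)
{
    struct vfio_iommu_type1_dma_map m;
    memset(&m, 0, sizeof(m));
    m.argsz = sizeof(m);
    m.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    m.vaddr = vaddr;
    m.iova  = iova;
    m.size  = size;
    return m;
}

static int do_map(int container, struct vfio_iommu_type1_dma_map *m)
{
    return ioctl(container, VFIO_IOMMU_MAP_DMA, m);
}

static int do_unmap(int container, unsigned long long iova,
                    unsigned long long size)
{
    struct vfio_iommu_type1_dma_unmap u;
    memset(&u, 0, sizeof(u));
    u.argsz = sizeof(u);
    u.iova  = iova;
    u.size  = size;
    return ioctl(container, VFIO_IOMMU_UNMAP_DMA, &u);
}
```

With 2M hugetlb backing for the 4G region, the (c) mapping is what the driver ends up installing as 2M superpage PDEs.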

The DMAR table at Operation-1 (showing only the entries covering IOVA 0):

PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x      1a34fbf003
    PTE: 0x               0

And the table of Operation-3 is:

PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x       15ec00883  <-- 2M superpage
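To make these dumps easier to read, the low bits of a VT-d second-level entry can be decoded directly: per the VT-d spec's second-level page-table entry format, bit 0 is Read, bit 1 is Write, and bit 7 in a PDE marks a 2M superpage. A quick decoder for the two values above (the macro names are ours, not the driver's):

```c
#include <assert.h>
#include <stdio.h>
#include <stdint.h>

#define SL_READ      (1ULL << 0)         /* bit 0: Read permitted        */
#define SL_WRITE     (1ULL << 1)         /* bit 1: Write permitted       */
#define SL_LARGE     (1ULL << 7)         /* bit 7 (in a PDE): 2M page    */
#define ADDR_2M_MASK 0xFFFFFFE00000ULL   /* HPA bits of a 2M-page PDE    */

/* Print the permission and page-size bits of a second-level entry. */
static void decode(const char *name, uint64_t e)
{
    printf("%s: %#llx  R=%d W=%d PS=%d\n", name, (unsigned long long)e,
           !!(e & SL_READ), !!(e & SL_WRITE), !!(e & SL_LARGE));
}
```

Decoding the Operation-1 PTE (0x0) shows neither Read nor Write set, which is exactly fault reason 06 ("PTE Read access is not set"), while the Operation-3 PDE (0x15ec00883) is a present, readable, writable 2M superpage mapping IOVA 0 to HPA 0x15ec00000.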

So we can see IOVA 0 is mapped, yet the DMA read faults:

dmar_fault: 131757 callbacks suppressed
DRHD: handling fault status reg 402
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
DRHD: handling fault status reg 600
DRHD: handling fault status reg 602
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set

NOTE: here is the magical part (*Operation-4*). We overwrite the PTE seen
in Operation-1, changing it from 0 to 0x3 (i.e. we set the Read and Write
bits), and then trigger the DMA read again: it succeeds and returns the
data at HPA 0!

Why would modifying the old page table make any difference? As we
discussed previously, the cache-flush logic in the driver looks correct:
it calls flush_iotlb after (b), and no flush should be needed after (c).
But the experiment shows that the old page table, or some old cached
translation derived from it, is still in effect.
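One way to picture what the experiment suggests (this is a toy model of one hypothesis, stale paging-structure caching, not the driver's code): if the IOMMU still holds a stale cached copy of the old PDE, its page walk lands in the old PTE page, reads 0, and faults; patching that old PTE to 0x3 makes exactly this stale walk succeed.

```c
#include <assert.h>
#include <stdint.h>

#define SL_READ  (1ULL << 0)
#define SL_WRITE (1ULL << 1)

/* Toy model: the walk for IOVA 0 goes through a PDE pointing at a page
 * of PTEs.  If the IOMMU has cached the OLD PDE, it keeps reading the
 * OLD PTE page even after software installed a new superpage PDE. */
struct walk_state {
    uint64_t *cached_pte_page; /* stale cached walk: the old PTE page */
    uint64_t  new_pde;         /* what software actually installed    */
};

/* Returns 1 if a DMA read of IOVA 0 succeeds, 0 if it faults. */
static int dma_read_iova0(const struct walk_state *w)
{
    if (w->cached_pte_page) {
        /* Stale walk: permissions come from the old PTE. */
        return (w->cached_pte_page[0] & SL_READ) != 0;
    }
    /* Clean walk: permissions come from the new superpage PDE. */
    return (w->new_pde & SL_READ) != 0;
}
```

In this model the fault is intermittent because it depends on whether the stale intermediate entry happens to still be cached, which matches the "sometimes success, sometimes error" behaviour in the loop.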

Any ideas ?

> Best regards,
> baolu


Thread overview: 50+ messages

2021-03-17  3:16 A problem of Intel IOMMU hardware ? Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17  5:16 ` Lu Baolu
2021-03-17  9:40   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17 15:18   ` Alex Williamson
2021-03-18  2:58     ` Lu Baolu
2021-03-18  4:46       ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.) [this message]
2021-03-18  7:48         ` Nadav Amit
2021-03-17  5:46 ` Nadav Amit
2021-03-17  9:35   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17 18:12     ` Nadav Amit
2021-03-18  3:03       ` Lu Baolu
2021-03-18  8:20       ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:27         ` Tian, Kevin
2021-03-18  8:38           ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:43             ` Tian, Kevin
2021-03-18  8:54               ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:56             ` Tian, Kevin
2021-03-18  9:25               ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18 16:46                 ` Nadav Amit
2021-03-21 23:51                   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-22  0:27                   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-27  2:31                   ` Lu Baolu
2021-03-27  4:36                     ` Nadav Amit
2021-03-27  5:27                       ` Lu Baolu
2021-03-19  0:15               ` Lu Baolu
