From: Stefano Stabellini <sstabellini@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>,
mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
Will Deacon <will@kernel.org>,
Frank Rowand <frowand.list@gmail.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
boris.ostrovsky@oracle.com, jgross@suse.com,
Christoph Hellwig <hch@lst.de>,
Marek Szyprowski <m.szyprowski@samsung.com>,
benh@kernel.crashing.org, paulus@samba.org,
"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
grant.likely@arm.com, xypron.glpk@gmx.de,
Thierry Reding <treding@nvidia.com>,
mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
Greg KH <gregkh@linuxfoundation.org>,
Saravana Kannan <saravanak@google.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
heikki.krogerus@linux.intel.com,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Randy Dunlap <rdunlap@infradead.org>,
Dan Williams <dan.j.williams@intel.com>,
Bartosz Golaszewski <bgolaszewski@baylibre.com>,
linux-devicetree <devicetree@vger.kernel.org>,
lkml <linux-kernel@vger.kernel.org>,
linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
Nicolas Boichat <drinkcat@chromium.org>,
Jim Quinlan <james.quinlan@broadcom.com>,
tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com,
chris@chris-wilson.co.uk, daniel@ffwll.ch, airlied@linux.ie,
dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
jani.nikula@linux.intel.com, jxgao@google.com,
joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v12 06/12] swiotlb: Use is_swiotlb_force_bounce for swiotlb data bouncing
Date: Wed, 16 Jun 2021 17:47:07 -0700 (PDT) [thread overview]
Message-ID: <alpine.DEB.2.21.2106161711030.24906@sstabellini-ThinkPad-T480s> (raw)
In-Reply-To: <20210616062157.953777-7-tientzu@chromium.org>
On Wed, 16 Jun 2021, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
> include/linux/swiotlb.h | 11 +++++++++++
> kernel/dma/direct.c | 2 +-
> kernel/dma/direct.h | 2 +-
> kernel/dma/swiotlb.c | 4 ++++
> 4 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index dd1c30a83058..8d8855c77d9a 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -84,6 +84,7 @@ extern enum swiotlb_force swiotlb_force;
> * unmap calls.
> * @debugfs: The dentry to debugfs.
> * @late_alloc: %true if allocated using the page allocator
> + * @force_bounce: %true if swiotlb bouncing is forced
> */
> struct io_tlb_mem {
> phys_addr_t start;
> @@ -94,6 +95,7 @@ struct io_tlb_mem {
> spinlock_t lock;
> struct dentry *debugfs;
> bool late_alloc;
> + bool force_bounce;
> struct io_tlb_slot {
> phys_addr_t orig_addr;
> size_t alloc_size;
> @@ -109,6 +111,11 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> return mem && paddr >= mem->start && paddr < mem->end;
> }
>
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> + return dev->dma_io_tlb_mem->force_bounce;
> +}
> void __init swiotlb_exit(void);
> unsigned int swiotlb_max_segment(void);
> size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -120,6 +127,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> {
> return false;
> }
> +static inline bool is_swiotlb_force_bounce(struct device *dev)
> +{
> + return false;
> +}
> static inline void swiotlb_exit(void)
> {
> }
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..a92465b4eb12 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
> {
> /* If SWIOTLB is active, use its maximum mapping size */
> if (is_swiotlb_active(dev) &&
> - (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> + (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
> return swiotlb_max_mapping_size(dev);
> return SIZE_MAX;
> }
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 13e9e7158d94..4632b0f4f72e 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -87,7 +87,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
> phys_addr_t phys = page_to_phys(page) + offset;
> dma_addr_t dma_addr = phys_to_dma(dev, phys);
>
> - if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> + if (is_swiotlb_force_bounce(dev))
> return swiotlb_map(dev, phys, size, dir, attrs);
>
> if (unlikely(!dma_capable(dev, dma_addr, size, true))) {

Should we also make the same change in
drivers/xen/swiotlb-xen.c:xen_swiotlb_map_page?
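
As a rough sketch of what that check could look like — toy types and a
hypothetical xen_can_map_directly() helper for illustration only, not the
actual drivers/xen/swiotlb-xen.c code (the toy helper also NULL-checks
dma_io_tlb_mem, which the patch's inline helper does not):

```c
#include <stdbool.h>

/* Simplified stand-ins for the kernel structures; field subset only. */
struct io_tlb_mem {
	bool force_bounce;
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* Mirrors the per-device helper added by this patch, replacing the old
 * global "swiotlb_force == SWIOTLB_FORCE" test. */
bool is_swiotlb_force_bounce(const struct device *dev)
{
	return dev->dma_io_tlb_mem && dev->dma_io_tlb_mem->force_bounce;
}

/* Sketch of the decision xen_swiotlb_map_page would make: take the
 * direct-mapping fast path only when bouncing is not forced for this
 * particular device. */
bool xen_can_map_directly(const struct device *dev, bool dma_capable)
{
	return dma_capable && !is_swiotlb_force_bounce(dev);
}
```

With this shape, a restricted-dma device bounces while an ordinary device
on the same system does not, which is the behavior the series is after.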

If I make that change, I can see that everything is working as
expected for a restricted-dma device with Linux running as dom0 on Xen.
However, is_swiotlb_force_bounce returns non-zero even for normal
non-restricted-dma devices. That shouldn't happen, right?

It looks like struct io_tlb_slot is not zeroed on allocation.
Adding memset(mem, 0x0, struct_size) in swiotlb_late_init_with_tbl
solves the issue.
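
A minimal userspace sketch of why that memset matters — io_tlb_mem_alloc()
is a hypothetical helper standing in for the allocation path in
swiotlb_late_init_with_tbl, not real kernel code: without zeroing, flags
like force_bounce can inherit whatever happened to be in the allocation.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for struct io_tlb_mem; field subset only. */
struct io_tlb_mem {
	unsigned long nslabs;
	bool late_alloc;
	bool force_bounce;
};

/* Models the suggested fix: zero the allocation before any fields are
 * set, so force_bounce never holds stale heap contents and stays false
 * until something explicitly turns it on. */
struct io_tlb_mem *io_tlb_mem_alloc(unsigned long nslabs)
{
	struct io_tlb_mem *mem = malloc(sizeof(*mem));

	if (!mem)
		return NULL;
	memset(mem, 0, sizeof(*mem));	/* the proposed memset */
	mem->nslabs = nslabs;
	mem->late_alloc = true;
	return mem;
}
```

After this, a normal non-restricted-dma device sees force_bounce == false,
matching the expected behavior described above.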

With those two changes, the series passes my tests and you can add my
Tested-by.