* [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack.
@ 2016-11-09 15:19 Alexander Duyck
2016-11-09 15:20 ` [swiotlb PATCH v3 1/3] swiotlb: remove unused swiotlb_map_sg and swiotlb_unmap_sg functions Alexander Duyck
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Alexander Duyck @ 2016-11-09 15:19 UTC (permalink / raw)
To: linux-mm, konrad.wilk; +Cc: netdev, linux-kernel
This patch series is a subset of the patches originally submitted with the
above patch title. Specifically, all of these patches relate to the
swiotlb.
I wasn't sure if I needed to resubmit this series or not. I see that v2 is
currently sitting in the for-linus-4.9 branch of the swiotlb git repo. If
no updates are required for the previous set then this patch set can be
ignored since most of the changes are just cosmetic.
v1: Split out the DMA_ERROR_CODE fix for swiotlb-xen as a separate change
Minor fixes based on issues found by kernel build bot
Few minor changes for issues found on code review
Added Acked-by for patches that were acked and not changed
v2: Added a few more Acked-by
Added swiotlb_unmap_sg to functions dropped in patch 1, dropped Acked-by
Submitting patches to mm instead of net-next
v3: Split patch set, first 3 to swiotlb, remaining 23 still to mm
Minor clean-ups for swiotlb code, mostly cosmetic
Replaced my patch with the one originally submitted by Christoph Hellwig
---
Alexander Duyck (2):
swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function
swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC
Christoph Hellwig (1):
swiotlb: remove unused swiotlb_map_sg and swiotlb_unmap_sg functions
arch/arm/xen/mm.c | 1 -
arch/x86/xen/pci-swiotlb-xen.c | 1 -
drivers/xen/swiotlb-xen.c | 19 +++++---------
include/linux/swiotlb.h | 14 +++--------
include/xen/swiotlb-xen.h | 3 --
lib/swiotlb.c | 53 +++++++++++++++++-----------------------
6 files changed, 33 insertions(+), 58 deletions(-)
--
* [swiotlb PATCH v3 1/3] swiotlb: remove unused swiotlb_map_sg and swiotlb_unmap_sg functions
2016-11-09 15:19 [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Alexander Duyck
@ 2016-11-09 15:20 ` Alexander Duyck
2016-11-09 15:20 ` [swiotlb PATCH v3 2/3] swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function Alexander Duyck
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Alexander Duyck @ 2016-11-09 15:20 UTC (permalink / raw)
To: linux-mm, konrad.wilk; +Cc: netdev, Christoph Hellwig, linux-kernel
From: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
include/linux/swiotlb.h | 8 --------
lib/swiotlb.c | 16 ----------------
2 files changed, 24 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 5f81f8a..f0d2589 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -73,14 +73,6 @@ extern void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
unsigned long attrs);
extern int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nents,
- enum dma_data_direction dir);
-
-extern void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nents,
- enum dma_data_direction dir);
-
-extern int
swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
enum dma_data_direction dir,
unsigned long attrs);
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 22e13a0..5005316 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -910,14 +910,6 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
}
EXPORT_SYMBOL(swiotlb_map_sg_attrs);
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
- enum dma_data_direction dir)
-{
- return swiotlb_map_sg_attrs(hwdev, sgl, nelems, dir, 0);
-}
-EXPORT_SYMBOL(swiotlb_map_sg);
-
/*
* Unmap a set of streaming mode DMA translations. Again, cpu read rules
* concerning calls here are the same as for swiotlb_unmap_page() above.
@@ -938,14 +930,6 @@ swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
}
EXPORT_SYMBOL(swiotlb_unmap_sg_attrs);
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
- enum dma_data_direction dir)
-{
- return swiotlb_unmap_sg_attrs(hwdev, sgl, nelems, dir, 0);
-}
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-
/*
* Make physical memory consistent for a set of streaming mode DMA translations
* after a transfer.
* [swiotlb PATCH v3 2/3] swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function
2016-11-09 15:19 [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Alexander Duyck
2016-11-09 15:20 ` [swiotlb PATCH v3 1/3] swiotlb: remove unused swiotlb_map_sg and swiotlb_unmap_sg functions Alexander Duyck
@ 2016-11-09 15:20 ` Alexander Duyck
2016-11-09 15:20 ` [swiotlb PATCH v3 3/3] swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC Alexander Duyck
2016-11-09 21:23 ` [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Konrad Rzeszutek Wilk
3 siblings, 0 replies; 6+ messages in thread
From: Alexander Duyck @ 2016-11-09 15:20 UTC (permalink / raw)
To: linux-mm, konrad.wilk; +Cc: netdev, linux-kernel
The mapping function should always return DMA_ERROR_CODE when a mapping has
failed as this is what the DMA API expects when a DMA error has occurred.
The current function for mapping a page in Xen was returning either
DMA_ERROR_CODE or 0 depending on where it failed.
On x86 DMA_ERROR_CODE is 0, but on other architectures such as ARM it is
~0. We need to make sure we return the same error value if either the
mapping failed or the device is not capable of accessing the mapping.
If we are returning DMA_ERROR_CODE as our error value, we can drop the
function for checking the error code, since the DMA API default, when no
mapping_error function is defined, is to compare the return value against
DMA_ERROR_CODE.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
v1: Added this patch which was part of an earlier patch.
v3: Undid changes to xen_swiotlb_map_page and only changed return value
arch/arm/xen/mm.c | 1 -
arch/x86/xen/pci-swiotlb-xen.c | 1 -
drivers/xen/swiotlb-xen.c | 9 +--------
include/xen/swiotlb-xen.h | 3 ---
4 files changed, 1 insertion(+), 13 deletions(-)
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index d062f08..bd62d94 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -186,7 +186,6 @@ struct dma_map_ops *xen_dma_ops;
EXPORT_SYMBOL(xen_dma_ops);
static struct dma_map_ops xen_swiotlb_dma_ops = {
- .mapping_error = xen_swiotlb_dma_mapping_error,
.alloc = xen_swiotlb_alloc_coherent,
.free = xen_swiotlb_free_coherent,
.sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 0e98e5d..a9fafb5 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -19,7 +19,6 @@
int xen_swiotlb __read_mostly;
static struct dma_map_ops xen_swiotlb_dma_ops = {
- .mapping_error = xen_swiotlb_dma_mapping_error,
.alloc = xen_swiotlb_alloc_coherent,
.free = xen_swiotlb_free_coherent,
.sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 87e6035..c36caa5 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -418,7 +418,7 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
*/
if (!dma_capable(dev, dev_addr, size)) {
swiotlb_tbl_unmap_single(dev, map, size, dir);
- dev_addr = 0;
+ return DMA_ERROR_CODE;
}
return dev_addr;
}
@@ -648,13 +648,6 @@ xen_swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
}
EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_device);
-int
-xen_swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
-{
- return !dma_addr;
-}
-EXPORT_SYMBOL_GPL(xen_swiotlb_dma_mapping_error);
-
/*
* Return whether the given device DMA address mask can be supported
* properly. For example, if your device can only drive the low 24-bits
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 7c35e27..a0083be 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -51,9 +51,6 @@ xen_swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
int nelems, enum dma_data_direction dir);
extern int
-xen_swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t dma_addr);
-
-extern int
xen_swiotlb_dma_supported(struct device *hwdev, u64 mask);
extern int
* [swiotlb PATCH v3 3/3] swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC
2016-11-09 15:19 [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Alexander Duyck
2016-11-09 15:20 ` [swiotlb PATCH v3 1/3] swiotlb: remove unused swiotlb_map_sg and swiotlb_unmap_sg functions Alexander Duyck
2016-11-09 15:20 ` [swiotlb PATCH v3 2/3] swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function Alexander Duyck
@ 2016-11-09 15:20 ` Alexander Duyck
2016-11-09 21:23 ` [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Konrad Rzeszutek Wilk
3 siblings, 0 replies; 6+ messages in thread
From: Alexander Duyck @ 2016-11-09 15:20 UTC (permalink / raw)
To: linux-mm, konrad.wilk; +Cc: netdev, linux-kernel
As a first step to making DMA_ATTR_SKIP_CPU_SYNC apply to architectures
beyond just ARM, I need to make the swiotlb respect the flag. In order to
do that I also need to update swiotlb-xen, since it makes heavy use of this
functionality.
In addition, I am applying the attribute to the unmap calls in the cases
where map_single or map_sg has to later destroy a buffer because the device
is not able to access the DMA region.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
v1: Found different fix for avoiding lines longer than 80 characters
Dropped code that moved section to a label at end of function.
Split out mapping error fix to separate patch.
v3: Used 0 where applying DMA_ATTR_SKIP_CPU_SYNC is redundant
Applied DMA_ATTR_SKIP_CPU_SYNC to attr instead of ORing in parameter
Unwrap a few lines that are more readable as a single line
Updated patch to work with changes in xen_swiotlb_map_page code flow
drivers/xen/swiotlb-xen.c | 10 ++++++----
include/linux/swiotlb.h | 6 ++++--
lib/swiotlb.c | 37 ++++++++++++++++++++++---------------
3 files changed, 32 insertions(+), 21 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index c36caa5..b0d5d27 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -405,7 +405,7 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
*/
trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
- map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir);
+ map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir, attrs);
if (map == SWIOTLB_MAP_ERROR)
return DMA_ERROR_CODE;
@@ -417,7 +417,8 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
* Ensure that the address returned is DMA'ble
*/
if (!dma_capable(dev, dev_addr, size)) {
- swiotlb_tbl_unmap_single(dev, map, size, dir);
+ attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+ swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
return DMA_ERROR_CODE;
}
return dev_addr;
@@ -444,7 +445,7 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
/* NOTE: We use dev_addr here, not paddr! */
if (is_xen_swiotlb_buffer(dev_addr)) {
- swiotlb_tbl_unmap_single(hwdev, paddr, size, dir);
+ swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);
return;
}
@@ -557,11 +558,12 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
start_dma_addr,
sg_phys(sg),
sg->length,
- dir);
+ dir, attrs);
if (map == SWIOTLB_MAP_ERROR) {
dev_warn(hwdev, "swiotlb buffer is full\n");
/* Don't panic here, we expect map_sg users
to do proper error handling. */
+ attrs |= DMA_ATTR_SKIP_CPU_SYNC;
xen_swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
attrs);
sg_dma_len(sgl) = 0;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f0d2589..183f37c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -44,11 +44,13 @@ enum dma_sync_target {
extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
dma_addr_t tbl_dma_addr,
phys_addr_t phys, size_t size,
- enum dma_data_direction dir);
+ enum dma_data_direction dir,
+ unsigned long attrs);
extern void swiotlb_tbl_unmap_single(struct device *hwdev,
phys_addr_t tlb_addr,
- size_t size, enum dma_data_direction dir);
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs);
extern void swiotlb_tbl_sync_single(struct device *hwdev,
phys_addr_t tlb_addr,
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 5005316..1fa0491 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -425,7 +425,8 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
dma_addr_t tbl_dma_addr,
phys_addr_t orig_addr, size_t size,
- enum dma_data_direction dir)
+ enum dma_data_direction dir,
+ unsigned long attrs)
{
unsigned long flags;
phys_addr_t tlb_addr;
@@ -526,7 +527,8 @@ found:
*/
for (i = 0; i < nslots; i++)
io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
- if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+ (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
return tlb_addr;
@@ -539,18 +541,19 @@ EXPORT_SYMBOL_GPL(swiotlb_tbl_map_single);
static phys_addr_t
map_single(struct device *hwdev, phys_addr_t phys, size_t size,
- enum dma_data_direction dir)
+ enum dma_data_direction dir, unsigned long attrs)
{
dma_addr_t start_dma_addr = phys_to_dma(hwdev, io_tlb_start);
- return swiotlb_tbl_map_single(hwdev, start_dma_addr, phys, size, dir);
+ return swiotlb_tbl_map_single(hwdev, start_dma_addr, phys, size, dir, attrs);
}
/*
* dma_addr is the kernel virtual address of the bounce buffer to unmap.
*/
void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
- size_t size, enum dma_data_direction dir)
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs)
{
unsigned long flags;
int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
@@ -561,6 +564,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
* First, sync the memory before unmapping the entry
*/
if (orig_addr != INVALID_PHYS_ADDR &&
+ !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE);
@@ -654,7 +658,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
* GFP_DMA memory; fall back on map_single(), which
* will grab memory from the lowest available address range.
*/
- phys_addr_t paddr = map_single(hwdev, 0, size, DMA_FROM_DEVICE);
+ phys_addr_t paddr = map_single(hwdev, 0, size, DMA_FROM_DEVICE, 0);
if (paddr == SWIOTLB_MAP_ERROR)
goto err_warn;
@@ -669,7 +673,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
swiotlb_tbl_unmap_single(hwdev, paddr,
- size, DMA_TO_DEVICE);
+ size, DMA_TO_DEVICE, 0);
goto err_warn;
}
}
@@ -699,7 +703,7 @@ swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
free_pages((unsigned long)vaddr, get_order(size));
else
/* DMA_TO_DEVICE to avoid memcpy in swiotlb_tbl_unmap_single */
- swiotlb_tbl_unmap_single(hwdev, paddr, size, DMA_TO_DEVICE);
+ swiotlb_tbl_unmap_single(hwdev, paddr, size, DMA_TO_DEVICE, 0);
}
EXPORT_SYMBOL(swiotlb_free_coherent);
@@ -755,7 +759,7 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
/* Oh well, have to allocate and map a bounce buffer. */
- map = map_single(dev, phys, size, dir);
+ map = map_single(dev, phys, size, dir, attrs);
if (map == SWIOTLB_MAP_ERROR) {
swiotlb_full(dev, size, dir, 1);
return phys_to_dma(dev, io_tlb_overflow_buffer);
@@ -765,7 +769,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
/* Ensure that the address returned is DMA'ble */
if (!dma_capable(dev, dev_addr, size)) {
- swiotlb_tbl_unmap_single(dev, map, size, dir);
+ attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+ swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
return phys_to_dma(dev, io_tlb_overflow_buffer);
}
@@ -782,14 +787,15 @@ EXPORT_SYMBOL_GPL(swiotlb_map_page);
* whatever the device wrote there.
*/
static void unmap_single(struct device *hwdev, dma_addr_t dev_addr,
- size_t size, enum dma_data_direction dir)
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs)
{
phys_addr_t paddr = dma_to_phys(hwdev, dev_addr);
BUG_ON(dir == DMA_NONE);
if (is_swiotlb_buffer(paddr)) {
- swiotlb_tbl_unmap_single(hwdev, paddr, size, dir);
+ swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);
return;
}
@@ -809,7 +815,7 @@ void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
- unmap_single(hwdev, dev_addr, size, dir);
+ unmap_single(hwdev, dev_addr, size, dir, attrs);
}
EXPORT_SYMBOL_GPL(swiotlb_unmap_page);
@@ -891,11 +897,12 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
if (swiotlb_force ||
!dma_capable(hwdev, dev_addr, sg->length)) {
phys_addr_t map = map_single(hwdev, sg_phys(sg),
- sg->length, dir);
+ sg->length, dir, attrs);
if (map == SWIOTLB_MAP_ERROR) {
/* Don't panic here, we expect map_sg users
to do proper error handling. */
swiotlb_full(hwdev, sg->length, dir, 0);
+ attrs |= DMA_ATTR_SKIP_CPU_SYNC;
swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
attrs);
sg_dma_len(sgl) = 0;
@@ -925,7 +932,7 @@ swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
BUG_ON(dir == DMA_NONE);
for_each_sg(sgl, sg, nelems, i)
- unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir);
+ unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir, attrs);
}
EXPORT_SYMBOL(swiotlb_unmap_sg_attrs);
* Re: [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack.
2016-11-09 15:19 [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Alexander Duyck
` (2 preceding siblings ...)
2016-11-09 15:20 ` [swiotlb PATCH v3 3/3] swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC Alexander Duyck
@ 2016-11-09 21:23 ` Konrad Rzeszutek Wilk
2016-11-09 21:29 ` Alexander Duyck
3 siblings, 1 reply; 6+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-11-09 21:23 UTC (permalink / raw)
To: Alexander Duyck; +Cc: linux-mm, netdev, linux-kernel
On Wed, Nov 09, 2016 at 10:19:57AM -0500, Alexander Duyck wrote:
> This patch series is a subset of the patches originally submitted with the
> above patch title. Specifically all of these patches relate to the
> swiotlb.
>
> I wasn't sure if I needed to resubmit this series or not. I see that v2 is
> currently sitting in the for-linus-4.9 branch of the swiotlb git repo. If
> no updates are required for the previous set then this patch set can be
> ignored since most of the changes are just cosmetic.
I had already tested v2, so if you have patches that you want to put on top
of that, please do send them.
* Re: [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack.
2016-11-09 21:23 ` [swiotlb PATCH v3 0/3] Add support for DMA writable pages being writable by the network stack Konrad Rzeszutek Wilk
@ 2016-11-09 21:29 ` Alexander Duyck
0 siblings, 0 replies; 6+ messages in thread
From: Alexander Duyck @ 2016-11-09 21:29 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: Alexander Duyck, linux-mm, Netdev, linux-kernel
On Wed, Nov 9, 2016 at 1:23 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Wed, Nov 09, 2016 at 10:19:57AM -0500, Alexander Duyck wrote:
>> This patch series is a subset of the patches originally submitted with the
>> above patch title. Specifically all of these patches relate to the
>> swiotlb.
>>
>> I wasn't sure if I needed to resubmit this series or not. I see that v2 is
>> currently sitting in the for-linus-4.9 branch of the swiotlb git repo. If
>> no updates are required for the previous set then this patch set can be
>> ignored since most of the changes are just cosmetic.
>
> I already had tested v2 so if you have patches that you want to put on top
> of that please do send them.
I will rebase, and if anything looks like it needs to be urgently fixed
I'll resubmit.
Thanks.
- Alex