* [PATCH net-next 0/2] Some minor optimizations for page pool
@ 2021-08-20 2:06 Yunsheng Lin
2021-08-20 2:06 ` [PATCH net-next 1/2] page_pool: use relaxed atomic for release side accounting Yunsheng Lin
2021-08-20 2:06 ` [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping Yunsheng Lin
From: Yunsheng Lin @ 2021-08-20 2:06 UTC
To: davem, kuba; +Cc: hawk, ilias.apalodimas, netdev, linux-kernel
Patch 1: Use a relaxed atomic for release side accounting
Patch 2: Minor optimization of the page_pool_dma_map() function
Yunsheng Lin (2):
page_pool: use relaxed atomic for release side accounting
page_pool: optimize the cpu sync operation when DMA mapping
net/core/page_pool.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
--
2.7.4
* [PATCH net-next 1/2] page_pool: use relaxed atomic for release side accounting
2021-08-20 2:06 [PATCH net-next 0/2] Some minor optimizations for page pool Yunsheng Lin
@ 2021-08-20 2:06 ` Yunsheng Lin
2021-08-20 2:06 ` [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping Yunsheng Lin
From: Yunsheng Lin @ 2021-08-20 2:06 UTC
To: davem, kuba; +Cc: hawk, ilias.apalodimas, netdev, linux-kernel
There is no need to synchronize the accounting update, so
use the relaxed atomic to avoid memory barriers in the
data path.
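
For illustration only (not part of the patch): a minimal,
self-contained userspace analogue using C11 atomics. The program
and counter name are hypothetical; the point is that a relaxed
read-modify-write is still atomic but emits no memory barrier on
weakly ordered CPUs such as arm64.

#include <stdatomic.h>
#include <stdio.h>

static atomic_int release_cnt;

static int count_release(void)
{
	/* Atomic increment with no ordering guarantees: fine for a
	 * counter whose value is only read back for accounting.
	 */
	return atomic_fetch_add_explicit(&release_cnt, 1,
					 memory_order_relaxed) + 1;
}

int main(void)
{
	printf("count = %d\n", count_release());
	return 0;
}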
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
net/core/page_pool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e140905..1a69784 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -370,7 +370,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
/* This may be the last page returned, releasing the pool, so
* it is not safe to reference pool afterwards.
*/
- count = atomic_inc_return(&pool->pages_state_release_cnt);
+ count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
trace_page_pool_state_release(pool, page, count);
}
EXPORT_SYMBOL(page_pool_release_page);
--
2.7.4
* [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping
2021-08-20 2:06 [PATCH net-next 0/2] Some minor optimizations for page pool Yunsheng Lin
2021-08-20 2:06 ` [PATCH net-next 1/2] page_pool: use relaxed atomic for release side accounting Yunsheng Lin
@ 2021-08-20 2:06 ` Yunsheng Lin
2021-08-20 6:10 ` Heiner Kallweit
From: Yunsheng Lin @ 2021-08-20 2:06 UTC
To: davem, kuba; +Cc: hawk, ilias.apalodimas, netdev, linux-kernel
If DMA_ATTR_SKIP_CPU_SYNC is not set, the CPU sync is also
done in dma_map_page_attrs(), so set attrs according to
pool->p.flags to avoid calling the dma sync function again.

Also mark the dma error as the unlikely case while we are
at it.
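
For context, a hedged sketch (not from any real driver; the field
values are illustrative and `dev` stands for the driver's struct
device) of how a driver opts in to device-side syncing. With
PP_FLAG_DMA_SYNC_DEV set, page_pool_dma_map() now leaves
DMA_ATTR_SKIP_CPU_SYNC cleared and lets dma_map_page_attrs()
perform the initial sync itself:

	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order     = 0,
		.pool_size = 256,		/* illustrative */
		.nid       = NUMA_NO_NODE,
		.dev       = dev,		/* driver's struct device */
		.dma_dir   = DMA_FROM_DEVICE,
		.max_len   = PAGE_SIZE,		/* sync length for RX */
		.offset    = 0,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool);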
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
net/core/page_pool.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 1a69784..8172045 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -191,8 +191,12 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
{
+ unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
dma_addr_t dma;
+ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+ attrs = 0;
+
/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
* since dma_addr_t can be either 32 or 64 bits and does not always fit
* into page private data (i.e 32bit cpu with 64bit DMA caps)
@@ -200,15 +204,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
*/
dma = dma_map_page_attrs(pool->p.dev, page, 0,
(PAGE_SIZE << pool->p.order),
- pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
- if (dma_mapping_error(pool->p.dev, dma))
+ pool->p.dma_dir, attrs);
+ if (unlikely(dma_mapping_error(pool->p.dev, dma)))
return false;
page_pool_set_dma_addr(page, dma);
- if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
- page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
-
return true;
}
--
2.7.4
* Re: [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping
2021-08-20 2:06 ` [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping Yunsheng Lin
@ 2021-08-20 6:10 ` Heiner Kallweit
2021-08-20 6:29 ` Yunsheng Lin
From: Heiner Kallweit @ 2021-08-20 6:10 UTC
To: Yunsheng Lin, davem, kuba; +Cc: hawk, ilias.apalodimas, netdev, linux-kernel
On 20.08.2021 04:06, Yunsheng Lin wrote:
> If DMA_ATTR_SKIP_CPU_SYNC is not set, the CPU sync is also
> done in dma_map_page_attrs(), so set attrs according to
> pool->p.flags to avoid calling the dma sync function again.
>
> Also mark the dma error as the unlikely case while we are
> at it.
>
This shouldn't be needed. dma_mapping_error() will be (most likely)
inlined by the compiler, and it includes the unlikely() hint.
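
For reference, a paraphrased sketch of the helper (from memory of
include/linux/dma-mapping.h around v5.14; the exact body may
differ, but the hint already sits inside the inline function):

static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
	debug_dma_mapping_error(dev, dma_addr);

	if (unlikely(dma_addr == DMA_MAPPING_ERROR))
		return -ENOMEM;
	return 0;
}

Once this is inlined, the branch hint is already in place at the
call site, so wrapping the call in another unlikely() adds nothing.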
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> net/core/page_pool.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 1a69784..8172045 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -191,8 +191,12 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
>
> static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
> {
> + unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
> dma_addr_t dma;
>
> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> + attrs = 0;
> +
> /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> * since dma_addr_t can be either 32 or 64 bits and does not always fit
> * into page private data (i.e 32bit cpu with 64bit DMA caps)
> @@ -200,15 +204,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
> */
> dma = dma_map_page_attrs(pool->p.dev, page, 0,
> (PAGE_SIZE << pool->p.order),
> - pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> - if (dma_mapping_error(pool->p.dev, dma))
> + pool->p.dma_dir, attrs);
> + if (unlikely(dma_mapping_error(pool->p.dev, dma)))
> return false;
>
> page_pool_set_dma_addr(page, dma);
>
> - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> - page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> -
> return true;
> }
* Re: [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping
2021-08-20 6:10 ` Heiner Kallweit
@ 2021-08-20 6:29 ` Yunsheng Lin
From: Yunsheng Lin @ 2021-08-20 6:29 UTC
To: Heiner Kallweit, davem, kuba; +Cc: hawk, ilias.apalodimas, netdev, linux-kernel
On 2021/8/20 14:10, Heiner Kallweit wrote:
> On 20.08.2021 04:06, Yunsheng Lin wrote:
>> If DMA_ATTR_SKIP_CPU_SYNC is not set, the CPU sync is also
>> done in dma_map_page_attrs(), so set attrs according to
>> pool->p.flags to avoid calling the dma sync function again.
>>
>> Also mark the dma error as the unlikely case while we are
>> at it.
>>
> This shouldn't be needed. dma_mapping_error() will be (most likely)
> inlined by the compiler, and it includes the unlikely() hint.
Good point, will remove the unlikely() mark.
Thanks.
>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> ---
>> net/core/page_pool.c | 11 ++++++-----
>> 1 file changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index 1a69784..8172045 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -191,8 +191,12 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
>>
>> static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>> {
>> + unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
>> dma_addr_t dma;
>>
>> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>> + attrs = 0;
>> +
>> /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
>> * since dma_addr_t can be either 32 or 64 bits and does not always fit
>> * into page private data (i.e 32bit cpu with 64bit DMA caps)
>> @@ -200,15 +204,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>> */
>> dma = dma_map_page_attrs(pool->p.dev, page, 0,
>> (PAGE_SIZE << pool->p.order),
>> - pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
>> - if (dma_mapping_error(pool->p.dev, dma))
>> + pool->p.dma_dir, attrs);
>> + if (unlikely(dma_mapping_error(pool->p.dev, dma)))
>> return false;
>>
>> page_pool_set_dma_addr(page, dma);
>>
>> - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>> - page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
>> -
>> return true;
>> }