From: Robin Murphy <robin.murphy@arm.com>
To: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
iommu@lists.linux-foundation.org, linux-mm@kvack.org
Cc: Will.Deacon@arm.com, lorenzo.pieralisi@arm.com,
hanjun.guo@linaro.org, joro@8bytes.org, vbabka@suse.cz,
akpm@linux-foundation.org, mhocko@suse.com,
Tomasz.Nowicki@cavium.com, Robert.Richter@cavium.com,
jnair@caviumnetworks.com, gklkml16@gmail.com
Subject: Re: [PATCH 4/4] iommu/dma, numa: Use NUMA aware memory allocations in __iommu_dma_alloc_pages
Date: Thu, 21 Sep 2017 12:41:11 +0100 [thread overview]
Message-ID: <9d65676f-e4e8-e0a6-602c-361d83ce83c1@arm.com> (raw)
In-Reply-To: <20170921085922.11659-5-ganapatrao.kulkarni@cavium.com>
On 21/09/17 09:59, Ganapatrao Kulkarni wrote:
> Change function __iommu_dma_alloc_pages to allocate memory/pages
> for dma from respective device numa node.
>
> Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
> ---
> drivers/iommu/dma-iommu.c | 17 ++++++++++-------
> 1 file changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 9d1cebe..0626b58 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -428,20 +428,21 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
> kvfree(pages);
> }
>
> -static struct page **__iommu_dma_alloc_pages(unsigned int count,
> - unsigned long order_mask, gfp_t gfp)
> +static struct page **__iommu_dma_alloc_pages(struct device *dev,
> + unsigned int count, unsigned long order_mask, gfp_t gfp)
> {
> struct page **pages;
> unsigned int i = 0, array_size = count * sizeof(*pages);
> + int numa_node = dev_to_node(dev);
>
> order_mask &= (2U << MAX_ORDER) - 1;
> if (!order_mask)
> return NULL;
>
> if (array_size <= PAGE_SIZE)
> - pages = kzalloc(array_size, GFP_KERNEL);
> + pages = kzalloc_node(array_size, GFP_KERNEL, numa_node);
> else
> - pages = vzalloc(array_size);
> + pages = vzalloc_node(array_size, numa_node);
kvzalloc{,_node}() didn't exist when this code was first written, but it
does now - since you're touching it you may as well get rid of the whole
if-else and array_size local.
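Something like this (untested sketch of what the kvzalloc_node() version could look like, keeping your dev_to_node() change):

```c
static struct page **__iommu_dma_alloc_pages(struct device *dev,
		unsigned int count, unsigned long order_mask, gfp_t gfp)
{
	struct page **pages;
	unsigned int i = 0;
	int numa_node = dev_to_node(dev);

	order_mask &= (2U << MAX_ORDER) - 1;
	if (!order_mask)
		return NULL;

	/* kvzalloc_node() falls back from kmalloc to vmalloc internally,
	 * so the if-else and the array_size local are no longer needed. */
	pages = kvzalloc_node(count * sizeof(*pages), GFP_KERNEL, numa_node);
	if (!pages)
		return NULL;
	/* ... rest as before ... */
```

(The corresponding free path already uses kvfree(), so nothing else should need to change.)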
Further nit: some of the indentation below is a bit messed up.
Robin.
> if (!pages)
> return NULL;
>
> @@ -462,8 +463,9 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count,
> unsigned int order = __fls(order_mask);
>
> order_size = 1U << order;
> - page = alloc_pages((order_mask - order_size) ?
> - gfp | __GFP_NORETRY : gfp, order);
> + page = alloc_pages_node(numa_node,
> + (order_mask - order_size) ?
> + gfp | __GFP_NORETRY : gfp, order);
> if (!page)
> continue;
> if (!order)
> @@ -548,7 +550,8 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> alloc_sizes = min_size;
>
> count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> - pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
> + pages = __iommu_dma_alloc_pages(dev, count, alloc_sizes >> PAGE_SHIFT,
> + gfp);
> if (!pages)
> return NULL;
>
>
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
Thread overview: 21+ messages
2017-09-21 8:59 [PATCH 0/4] numa, iommu/smmu: IOMMU/SMMU driver optimization for NUMA systems Ganapatrao Kulkarni
2017-09-21 8:59 ` [PATCH 1/4] mm: move function alloc_pages_exact_nid out of __meminit Ganapatrao Kulkarni
2017-09-26 13:35 ` Michal Hocko
2017-09-21 8:59 ` [PATCH 2/4] numa, iommu/io-pgtable-arm: Use NUMA aware memory allocation for smmu translation tables Ganapatrao Kulkarni
2017-09-21 11:11 ` Robin Murphy
2017-09-22 15:33 ` Ganapatrao Kulkarni
2017-09-21 8:59 ` [PATCH 3/4] iommu/arm-smmu-v3: Use NUMA memory allocations for stream tables and comamnd queues Ganapatrao Kulkarni
2017-09-21 11:58 ` Robin Murphy
2017-09-21 14:26 ` Christoph Hellwig
2017-09-29 12:13 ` Marek Szyprowski
2017-10-04 13:53 ` Ganapatrao Kulkarni
2017-10-18 13:36 ` Robin Murphy
2017-11-06 9:04 ` Ganapatrao Kulkarni
2017-09-21 8:59 ` [PATCH 4/4] iommu/dma, numa: Use NUMA aware memory allocations in __iommu_dma_alloc_pages Ganapatrao Kulkarni
2017-09-21 11:41 ` Robin Murphy [this message]
2017-09-22 15:44 ` Ganapatrao Kulkarni
2017-10-18 13:28 ` [PATCH 0/4] numa, iommu/smmu: IOMMU/SMMU driver optimization for NUMA systems Will Deacon
2018-08-22 13:44 ` John Garry
2018-08-22 14:56 ` Robin Murphy
2018-08-22 16:07 ` John Garry
2018-08-22 17:57 ` Ganapatrao Kulkarni