From: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
To: amit.pundir@linaro.org, hch@lst.de, linux-kernel@vger.kernel.org,
	Marek Szyprowski, Robin Murphy
Cc: rientjes@google.com, jeremy.linton@arm.com,
	linux-rpi-kernel@lists.infradead.org, Nicolas Saenz Julienne,
	iommu@lists.linux-foundation.org
Subject: [PATCH v4 1/2] dma-pool: Only allocate from CMA when in same memory zone
Date: Fri, 14 Aug 2020 12:26:23 +0200
Message-Id: <20200814102625.25599-2-nsaenzjulienne@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200814102625.25599-1-nsaenzjulienne@suse.de>
References: <20200814102625.25599-1-nsaenzjulienne@suse.de>

There is no guarantee of CMA's placement, so allocating a zone-specific
atomic pool from CMA might return memory from a completely different
memory zone. To get around this, double-check CMA's placement before
allocating from it.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
[hch: rebased, added a fallback to the page allocator, allow dipping
 into lower CMA pools]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---

Changes since v3:
 - Do not use memblock_start_of_DRAM()

Changes since v2:
 - Go back to v1 behavior

 kernel/dma/pool.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 6bc74a2d5127..57f4a0f32a92 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -3,7 +3,9 @@
  * Copyright (C) 2012 ARM Ltd.
 * Copyright (C) 2020 Google LLC
  */
+#include <linux/cma.h>
 #include <linux/debugfs.h>
+#include <linux/dma-contiguous.h>
 #include <linux/dma-direct.h>
 #include <linux/dma-noncoherent.h>
 #include <linux/init.h>
@@ -55,6 +57,29 @@ static void dma_atomic_pool_size_add(gfp_t gfp, size_t size)
 		pool_size_kernel += size;
 }
 
+static bool cma_in_zone(gfp_t gfp)
+{
+	unsigned long size;
+	phys_addr_t end;
+	struct cma *cma;
+
+	cma = dev_get_cma_area(NULL);
+	if (!cma)
+		return false;
+
+	size = cma_get_size(cma);
+	if (!size)
+		return false;
+
+	/* CMA can't cross zone boundaries, see cma_activate_area() */
+	end = cma_get_base(cma) + size - 1;
+	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
+		return end <= DMA_BIT_MASK(zone_dma_bits);
+	if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
+		return end <= DMA_BIT_MASK(32);
+	return true;
+}
+
 static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 			      gfp_t gfp)
 {
@@ -68,7 +93,11 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 
 	do {
 		pool_size = 1 << (PAGE_SHIFT + order);
-		page = alloc_pages(gfp, order);
+		if (cma_in_zone(gfp))
+			page = dma_alloc_from_contiguous(NULL, 1 << order,
+							 order, false);
+		if (!page)
+			page = alloc_pages(gfp, order);
 	} while (!page && order-- > 0);
 	if (!page)
 		goto out;
-- 
2.28.0
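
For reference, below is a minimal, self-contained userspace sketch of the
containment check that cma_in_zone() performs above. range_fits_zone() is a
hypothetical helper; the 30-bit ZONE_DMA width (as arm64 uses for the
Raspberry Pi 4) and the sample CMA placement are illustrative assumptions,
not values read from a real system. DMA_BIT_MASK() mirrors the kernel macro
of the same name.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the kernel's DMA_BIT_MASK(n): last address reachable with n bits. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Hypothetical zone width; arm64 uses 30 bits for ZONE_DMA on the RPi4. */
static const unsigned int zone_dma_bits = 30;

/* A region fits a zone iff its last byte is addressable within that zone. */
static bool range_fits_zone(uint64_t base, uint64_t size, unsigned int bits)
{
	uint64_t end = base + size - 1;

	return end <= DMA_BIT_MASK(bits);
}

int main(void)
{
	/* Hypothetical CMA area: 64 MiB placed just below 4 GiB. */
	uint64_t base = 0x100000000ULL - (64ULL << 20);
	uint64_t size = 64ULL << 20;

	printf("fits ZONE_DMA   (%u bits): %s\n", zone_dma_bits,
	       range_fits_zone(base, size, zone_dma_bits) ? "yes" : "no");
	printf("fits ZONE_DMA32 (32 bits): %s\n",
	       range_fits_zone(base, size, 32) ? "yes" : "no");
	return 0;
}

Note that the comparison is against the region's last byte (base + size - 1),
matching the patch's inclusive end <= DMA_BIT_MASK(...) check: a CMA area is
usable for a zone-restricted pool only if it lies entirely below that zone's
limit.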