Date: Tue, 5 Oct 2021 10:35:17 +0200
From: Oscar Salvador
To: Mike Kravetz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	David Hildenbrand, Michal Hocko, Zi Yan, Muchun Song,
	Naoya Horiguchi, David Rientjes, Aneesh Kumar K.V,
	Andrew Morton
Subject: Re: [PATCH v3 2/5] mm/cma: add cma_pages_valid to determine if pages are in CMA
Message-ID: <20211005083516.GA20090@linux>
References: <20211001175210.45968-1-mike.kravetz@oracle.com>
	<20211001175210.45968-3-mike.kravetz@oracle.com>
In-Reply-To: <20211001175210.45968-3-mike.kravetz@oracle.com>

On Fri, Oct 01, 2021 at 10:52:07AM -0700, Mike Kravetz wrote:
> +bool cma_pages_valid(struct cma *cma, const struct page *pages,
> +		     unsigned long count)
> +{
> +	unsigned long pfn;
> +
> +	if (!cma || !pages)
> +		return false;
> +
> +	pfn = page_to_pfn(pages);
> +
> +	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> +		return false;
> +
> +	return true;
> +}
> +
>  /**
>   * cma_release() - release allocated pages
>   * @cma: Contiguous memory region for which the allocation is performed.
> @@ -539,16 +555,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
>  {
>  	unsigned long pfn;
>
> -	if (!cma || !pages)
> +	if (!cma_pages_valid(cma, pages, count))
>  		return false;
>
>  	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
>
>  	pfn = page_to_pfn(pages);
>
> -	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
> -		return false;
> -

Might be worth noting that after this change the pr_debug() statement no
longer fires in all the cases it used to: for pages outside the CMA range
we now back off before ever reaching it. You might want to point that out
in the changelog in case someone wonders why (or keep the old ordering;
see the sketch below).

-- 
Oscar Salvador
SUSE Labs
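
P.S. A minimal, untested sketch of what keeping the old debug ordering
could look like. This is illustration only, not part of the posted
series, and it assumes the rest of cma_release() stays as in the patch:

	bool cma_release(struct cma *cma, const struct page *pages,
			 unsigned long count)
	{
		unsigned long pfn;

		if (!cma || !pages)
			return false;

		/*
		 * As before the patch: the debug print fires even when
		 * the pages turn out to lie outside the CMA range.
		 */
		pr_debug("%s(page %p, count %lu)\n", __func__,
			 (void *)pages, count);

		if (!cma_pages_valid(cma, pages, count))
			return false;

		pfn = page_to_pfn(pages);

		/* ... rest of cma_release() unchanged ... */
		return true;
	}

The NULL checks end up duplicated, since cma_pages_valid() repeats them;
that is the price of keeping the print exactly where it used to be.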