From: Tony Battersby <tonyb@cybernetics.com>
Date: Tue, 31 May 2022 14:17:44 -0400
Subject: [PATCH 04/10] dmapool: improve accuracy of debug statistics
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: iommu@lists.linux-foundation.org, kernel-team@fb.com, Matthew Wilcox,
 Keith Busch, Andy Shevchenko, Robin Murphy, Tony Lindgren
In-Reply-To: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com>
References: <9b08ab7c-b80b-527d-9adf-7716b0868fbc@cybernetics.com>

The "total number of blocks in pool" debug statistic currently does not
take the boundary value into account, so it diverges from the "total
number of blocks in use" statistic when a boundary is in effect.  Add a
calculation for the number of blocks per allocation that takes the
boundary into account, and use it to replace the inaccurate
calculation.

This depends on the patch "dmapool: fix boundary comparison" for the
calculated blks_per_alloc value to be correct.
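As an illustration of the arithmetic (not part of the patch itself), below
is a minimal standalone C sketch of the boundary-aware count; the size,
boundary, and allocation values are made up purely to show how it can
diverge from the old allocation / size estimate:

#include <stdio.h>

int main(void)
{
	/* Hypothetical pool parameters, chosen only for illustration;
	 * real values come from the dma_pool_create() caller.
	 */
	unsigned int size = 1536;	/* bytes per block */
	unsigned int boundary = 4096;	/* blocks may not cross this */
	unsigned int allocation = 16384;	/* bytes per underlying allocation */

	/* Old statistic: assumes blocks are packed back to back. */
	unsigned int old_estimate = allocation / size;

	/* Boundary-aware count: each whole boundary-sized chunk holds
	 * boundary / size blocks, plus whatever fits in the remainder.
	 */
	unsigned int blks_per_alloc =
		(allocation / boundary) * (boundary / size) +
		(allocation % boundary) / size;

	/* Prints "old estimate: 10, boundary-aware: 8". */
	printf("old estimate: %u, boundary-aware: %u\n",
	       old_estimate, blks_per_alloc);
	return 0;
}

With these example numbers the old formula over-reports capacity by two
blocks per allocation, which is exactly the mismatch removed from the
sysfs output by this patch.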
Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
---
 mm/dmapool.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 782143144a32..9e30f4425dea 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -47,6 +47,7 @@ struct dma_pool {	/* the pool */
 	struct device *dev;
 	unsigned int allocation;
 	unsigned int boundary;
+	unsigned int blks_per_alloc;
 	char name[32];
 	struct list_head pools;
 };
@@ -92,8 +93,7 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
 		/* per-pool info, no real statistics yet */
 		temp = scnprintf(next, size, "%-16s %4zu %4zu %4u %2u\n",
 				 pool->name, blocks,
-				 (size_t) pages *
-				 (pool->allocation / pool->size),
+				 (size_t) pages * pool->blks_per_alloc,
 				 pool->size, pages);
 		size -= temp;
 		next += temp;
@@ -168,6 +168,9 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 	retval->size = size;
 	retval->boundary = boundary;
 	retval->allocation = allocation;
+	retval->blks_per_alloc =
+		(allocation / boundary) * (boundary / size) +
+		(allocation % boundary) / size;
 
 	INIT_LIST_HEAD(&retval->pools);
 
-- 
2.25.1