From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz,
    Nathan Chancellor, Davidlohr Bueso, Andrew Morton, Nick Desaulniers,
    Ilie Halip, David Bolvansky, Linus Torvalds
Subject: [PATCH 4.4 52/54] hugetlbfs: hugetlb_fault_mutex_hash() cleanup
Date: Mon, 31 May 2021 15:14:18 +0200
Message-Id: <20210531130636.696628201@linuxfoundation.org>
In-Reply-To: <20210531130635.070310929@linuxfoundation.org>
References: <20210531130635.070310929@linuxfoundation.org>

From: Mike Kravetz

commit 552546366a30d88bd1d6f5efe848b2ab50fd57e5 upstream.

A new clang diagnostic (-Wsizeof-array-div) warns about the calculation
used to determine the number of u32s in an array of unsigned longs.
Suppress the warning by adding parentheses.

While looking at the above issue, I noticed that the 'address' parameter
of hugetlb_fault_mutex_hash() is no longer used, so remove it from the
definition and from all callers.

No functional change.
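For illustration only (not part of the patch), a minimal standalone C
program showing the form that clang's -Wsizeof-array-div flags and the
parenthesized form that silences it; it uses userspace uint32_t in place
of the kernel's u32, and the array merely mirrors the key[2] of unsigned
long hashed by hugetlb_fault_mutex_hash():

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned long key[2];   /* stands in for the kernel's key[2] */

          /* clang -Wsizeof-array-div flags this: the divisor's type
           * (uint32_t) is not the array's element type (unsigned long),
           * so it looks like a miscounted number-of-elements idiom. */
          size_t n_u32  = sizeof(key) / sizeof(uint32_t);

          /* Parenthesizing the divisor, as the patch does for the
           * jhash2() argument, keeps the same value and suppresses
           * the warning. */
          size_t n_u32b = sizeof(key) / (sizeof(uint32_t));

          printf("%zu %zu\n", n_u32, n_u32b);  /* prints "4 4" on LP64 */
          return 0;
  }

Here the division is intentional (two unsigned longs hold four u32 words
of hash input), which is why the fix makes the intent explicit rather
than changing the computation.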
Link: http://lkml.kernel.org/r/20190919011847.18400-1-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz
Reported-by: Nathan Chancellor
Reviewed-by: Nathan Chancellor
Reviewed-by: Davidlohr Bueso
Reviewed-by: Andrew Morton
Cc: Nick Desaulniers
Cc: Ilie Halip
Cc: David Bolvansky
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 fs/hugetlbfs/inode.c    |    4 ++--
 include/linux/hugetlb.h |    2 +-
 mm/hugetlb.c            |    8 ++++----
 3 files changed, 7 insertions(+), 7 deletions(-)

--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -414,7 +414,7 @@ static void remove_inode_hugepages(struc
 			if (next >= end)
 				break;
 
-			hash = hugetlb_fault_mutex_hash(h, mapping, next, 0);
+			hash = hugetlb_fault_mutex_hash(h, mapping, next);
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 			lock_page(page);
@@ -630,7 +630,7 @@ static long hugetlbfs_fallocate(struct f
 		addr = index * hpage_size;
 
 		/* mutex taken here, fault path and hole punch */
-		hash = hugetlb_fault_mutex_hash(h, mapping, index, addr);
+		hash = hugetlb_fault_mutex_hash(h, mapping, index);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		/* See if already present in mapping to avoid alloc/free */
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -92,7 +92,7 @@ void free_huge_page(struct page *page);
 void hugetlb_fix_reserve_counts(struct inode *inode, bool restore_reserve);
 extern struct mutex *hugetlb_fault_mutex_table;
 u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-				pgoff_t idx, unsigned long address);
+				pgoff_t idx);
 
 #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3771,7 +3771,7 @@ backout_unlocked:
 
 #ifdef CONFIG_SMP
 u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			    pgoff_t idx, unsigned long address)
+			    pgoff_t idx)
 {
 	unsigned long key[2];
 	u32 hash;
@@ -3779,7 +3779,7 @@ u32 hugetlb_fault_mutex_hash(struct hsta
 	key[0] = (unsigned long) mapping;
 	key[1] = idx;
 
-	hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);
+	hash = jhash2((u32 *)&key, sizeof(key)/(sizeof(u32)), 0);
 
 	return hash & (num_fault_mutexes - 1);
 }
@@ -3789,7 +3789,7 @@ u32 hugetlb_fault_mutex_hash(struct hsta
  * return 0 and avoid the hashing overhead.
  */
 u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			    pgoff_t idx, unsigned long address)
+			    pgoff_t idx)
 {
 	return 0;
 }
@@ -3834,7 +3834,7 @@ int hugetlb_fault(struct mm_struct *mm,
 	 * get spurious allocation failures if two CPUs race to instantiate
 	 * the same page in the page cache.
	 */
-	hash = hugetlb_fault_mutex_hash(h, mapping, idx, address);
+	hash = hugetlb_fault_mutex_hash(h, mapping, idx);
 	mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 	entry = huge_ptep_get(ptep);