From: Yang Shi
Date: Wed, 7 Apr 2021 14:40:29 -0700
Subject: Re: [PATCH v2 1/2] mm: khugepaged: use macro to align addresses
To: yanfei.xu@windriver.com
Cc: Linux MM <linux-mm@kvack.org>, Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
In-Reply-To: <20210407030548.189104-2-yanfei.xu@windriver.com>
References: <20210407030548.189104-1-yanfei.xu@windriver.com> <20210407030548.189104-2-yanfei.xu@windriver.com>

On Tue, Apr 6, 2021 at 8:06 PM <yanfei.xu@windriver.com> wrote:
>
> From: Yanfei Xu <yanfei.xu@windriver.com>
>
> We can use macros to handle the addresses that need to be aligned,
> which improves the readability of the code.

Reviewed-by: Yang Shi

>
> Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
> ---
>  mm/khugepaged.c | 27 +++++++++++++--------------
>  1 file changed, 13 insertions(+), 14 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index a7d6cb912b05..a6012b9259a2 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -517,8 +517,8 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
>  	if (!hugepage_vma_check(vma, vm_flags))
>  		return 0;
>
> -	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> -	hend = vma->vm_end & HPAGE_PMD_MASK;
> +	hstart = ALIGN(vma->vm_start, HPAGE_PMD_SIZE);
> +	hend = ALIGN_DOWN(vma->vm_end, HPAGE_PMD_SIZE);
>  	if (hstart < hend)
>  		return khugepaged_enter(vma, vm_flags);
>  	return 0;
> @@ -979,8 +979,8 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  	if (!vma)
>  		return SCAN_VMA_NULL;
>
> -	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> -	hend = vma->vm_end & HPAGE_PMD_MASK;
> +	hstart = ALIGN(vma->vm_start, HPAGE_PMD_SIZE);
> +	hend = ALIGN_DOWN(vma->vm_end, HPAGE_PMD_SIZE);
>  	if (address < hstart || address + HPAGE_PMD_SIZE > hend)
>  		return SCAN_ADDRESS_RANGE;
>  	if (!hugepage_vma_check(vma, vma->vm_flags))
> @@ -1070,7 +1070,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>  	struct mmu_notifier_range range;
>  	gfp_t gfp;
>
> -	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> +	VM_BUG_ON(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>
>  	/* Only allocate from the target node */
>  	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
> @@ -1235,7 +1235,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  	int node = NUMA_NO_NODE, unmapped = 0;
>  	bool writable = false;
>
> -	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> +	VM_BUG_ON(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>
>  	pmd = mm_find_pmd(mm, address);
>  	if (!pmd) {
> @@ -1414,7 +1414,7 @@ static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
>  {
>  	struct mm_slot *mm_slot;
>
> -	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
> +	VM_BUG_ON(!IS_ALIGNED(addr, HPAGE_PMD_SIZE));
>
>  	spin_lock(&khugepaged_mm_lock);
>  	mm_slot = get_mm_slot(mm);
> @@ -1437,7 +1437,7 @@ static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
>   */
>  void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
>  {
> -	unsigned long haddr = addr & HPAGE_PMD_MASK;
> +	unsigned long haddr = ALIGN_DOWN(addr, HPAGE_PMD_SIZE);
>  	struct vm_area_struct *vma = find_vma(mm, haddr);
>  	struct page *hpage;
>  	pte_t *start_pte, *pte;
> @@ -1584,7 +1584,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  		if (vma->anon_vma)
>  			continue;
>  		addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> -		if (addr & ~HPAGE_PMD_MASK)
> +		if (!IS_ALIGNED(addr, HPAGE_PMD_SIZE))
>  			continue;
>  		if (vma->vm_end < addr + HPAGE_PMD_SIZE)
>  			continue;
> @@ -2070,7 +2070,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>  {
>  	struct mm_slot *mm_slot;
>  	struct mm_struct *mm;
> -	struct vm_area_struct *vma;
> +	struct vm_area_struct *vma = NULL;
>  	int progress = 0;
>
>  	VM_BUG_ON(!pages);
> @@ -2092,7 +2092,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>  	 * Don't wait for semaphore (to avoid long wait times). Just move to
>  	 * the next mm on the list.
>  	 */
> -	vma = NULL;
>  	if (unlikely(!mmap_read_trylock(mm)))
>  		goto breakouterloop_mmap_lock;
>  	if (likely(!khugepaged_test_exit(mm)))
> @@ -2112,15 +2111,15 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>  			progress++;
>  			continue;
>  		}
> -		hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> -		hend = vma->vm_end & HPAGE_PMD_MASK;
> +		hstart = ALIGN(vma->vm_start, HPAGE_PMD_SIZE);
> +		hend = ALIGN_DOWN(vma->vm_end, HPAGE_PMD_SIZE);
>  		if (hstart >= hend)
>  			goto skip;
>  		if (khugepaged_scan.address > hend)
>  			goto skip;
>  		if (khugepaged_scan.address < hstart)
>  			khugepaged_scan.address = hstart;
> -		VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
> +		VM_BUG_ON(!IS_ALIGNED(khugepaged_scan.address, HPAGE_PMD_SIZE));
>  		if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
>  			goto skip;
>
> --
> 2.27.0
>
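
For reference, ALIGN(), ALIGN_DOWN() and IS_ALIGNED() are exact drop-in
replacements for the open-coded mask arithmetic whenever the alignment is a
power of two, which HPAGE_PMD_SIZE always is. The following is a minimal
userspace sketch of that equivalence; the macro definitions are simplified
stand-ins for the kernel's include/linux/align.h versions, and the 2 MiB
HPAGE_PMD_SIZE is an assumed value (x86-64 with 4 KiB base pages), not
something taken from this patch:

/*
 * Userspace sketch (not kernel code): simplified stand-ins for the
 * kernel's ALIGN()/ALIGN_DOWN()/IS_ALIGNED() macros, checked against
 * the open-coded mask arithmetic this patch removes.
 */
#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_SIZE  (1UL << 21)              /* assumed: 2 MiB */
#define HPAGE_PMD_MASK  (~(HPAGE_PMD_SIZE - 1))  /* clears the low 21 bits */

#define ALIGN(x, a)       (((x) + (a) - 1) & ~((a) - 1))  /* round up */
#define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))              /* round down */
#define IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)

int main(void)
{
        unsigned long addr;

        for (addr = 0; addr < (1UL << 24); addr += 0x12345) {
                /* round up: old khugepaged idiom vs ALIGN() */
                assert(((addr + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) ==
                       ALIGN(addr, HPAGE_PMD_SIZE));
                /* round down: old idiom vs ALIGN_DOWN() */
                assert((addr & HPAGE_PMD_MASK) ==
                       ALIGN_DOWN(addr, HPAGE_PMD_SIZE));
                /* alignment check: old idiom vs IS_ALIGNED() */
                assert(((addr & ~HPAGE_PMD_MASK) == 0) ==
                       IS_ALIGNED(addr, HPAGE_PMD_SIZE));
        }
        printf("mask arithmetic and ALIGN macros agree\n");
        return 0;
}

Since HPAGE_PMD_MASK is just ~(HPAGE_PMD_SIZE - 1), each pair of expressions
is bit-for-bit identical, which is why the patch is a pure readability
cleanup with no functional change.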