From: Mauro Carvalho Chehab
To: Andrew Morton
Cc: Mauro Carvalho Chehab, "Jonathan Corbet", "Linux Doc Mailing List",
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Matthew Wilcox
Subject: [PATCH v4 25/27] mm: fix kernel-doc markups
Date: Mon, 16 Nov 2020 11:18:21 +0100
Message-Id: <80e85dddc92d333bc2159ee8a2294921612e8745.1605521731.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.28.0

Kernel-doc markups should use this format:

        identifier - description
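For example, a markup following this layout could look like the sketch
below (a hypothetical function, shown only to illustrate the expected
structure: identifier plus short description on the first line, the
parameters next, and the long description after them):

        /**
         * frob_pages - frob a range of pages
         * @start: starting user address
         * @nr_pages: number of pages from start to frob
         *
         * The longer, free-form description of what the function does
         * goes here, after the parameter block.
         *
         * Return: the number of pages actually frobbed
         */
        long frob_pages(unsigned long start, unsigned long nr_pages);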
Fix some issues in mm files:

1) The definition for get_user_pages_locked() doesn't follow it.
   Also, kernel-doc expects a short description at the header,
   followed by a long one after the parameters. Fix it.

2) Kernel-doc requires that the markup be placed immediately before
   the function prototype, as otherwise the description gets attached
   to the wrong function. So, move the get_pfnblock_flags_mask()
   description to the right place.

3) Make invalidate_mapping_pagevec() also follow the expected
   kernel-doc format.

While here, fix a few minor English syntax issues, as suggested
by Matthew:

        will used -> will be used
        similar with -> similar to

Suggested-by: Matthew Wilcox # English fixes
Signed-off-by: Mauro Carvalho Chehab
---
 mm/gup.c        | 24 +++++++++++++-----------
 mm/page_alloc.c | 16 ++++++++--------
 mm/truncate.c   | 10 ++++++++--
 3 files changed, 29 insertions(+), 21 deletions(-)
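A minimal sketch of the placement rule behind fix 2, with made-up names
(not from this patch): kernel-doc binds a /** ... */ block to the
definition that immediately follows it, so a markup left in front of an
internal helper is attached to the helper rather than to the function
named in the markup:

        /**
         * frob - do the frobbing
         */
        static inline int __frob(void)  /* the block above binds here */
        {
                return 0;
        }

        /* intended target of the markup, now left undocumented */
        int frob(void)
        {
                return __frob();
        }

Moving the comment so that it sits directly above frob() restores the
intended association, which is what the get_pfnblock_flags_mask() hunk
below does.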
diff --git a/mm/gup.c b/mm/gup.c
index 49c4eabca271..f3751bf28326 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1924,66 +1924,68 @@ static long __get_user_pages_remote(struct mm_struct *mm,
  * Or NULL if the caller does not require them.
  *
  * This is the same as get_user_pages_remote(), just with a less-flexible
  * calling convention where we assume that the mm being operated on belongs to
  * the current task, and doesn't allow passing of a locked parameter.  We also
  * obviously don't pass FOLL_REMOTE in here.
  */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
                unsigned int gup_flags, struct page **pages,
                struct vm_area_struct **vmas)
 {
        if (!is_valid_gup_flags(gup_flags))
                return -EINVAL;

        return __gup_longterm_locked(current->mm, start, nr_pages,
                                     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);

 /**
- * get_user_pages_locked() is suitable to replace the form:
+ * get_user_pages_locked() - variant of get_user_pages()
+ *
+ * @start:     starting user address
+ * @nr_pages:  number of pages from start to pin
+ * @gup_flags: flags modifying lookup behaviour
+ * @pages:     array that receives pointers to the pages pinned.
+ *             Should be at least nr_pages long. Or NULL, if caller
+ *             only intends to ensure the pages are faulted in.
+ * @locked:    pointer to lock flag indicating whether lock is held and
+ *             subsequently whether VM_FAULT_RETRY functionality can be
+ *             utilised. Lock must initially be held.
+ *
+ * It is suitable to replace the form:
  *
  *      mmap_read_lock(mm);
  *      do_something()
  *      get_user_pages(mm, ..., pages, NULL);
  *      mmap_read_unlock(mm);
  *
  *  to:
  *
  *      int locked = 1;
  *      mmap_read_lock(mm);
  *      do_something()
  *      get_user_pages_locked(mm, ..., pages, &locked);
  *      if (locked)
  *          mmap_read_unlock(mm);
  *
- * @start:     starting user address
- * @nr_pages:  number of pages from start to pin
- * @gup_flags: flags modifying lookup behaviour
- * @pages:     array that receives pointers to the pages pinned.
- *             Should be at least nr_pages long. Or NULL, if caller
- *             only intends to ensure the pages are faulted in.
- * @locked:    pointer to lock flag indicating whether lock is held and
- *             subsequently whether VM_FAULT_RETRY functionality can be
- *             utilised. Lock must initially be held.
- *
  * We can leverage the VM_FAULT_RETRY functionality in the page fault
  * paths better by using either get_user_pages_locked() or
  * get_user_pages_unlocked().
  *
  */
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
                           unsigned int gup_flags, struct page **pages,
                           int *locked)
 {
        /*
         * FIXME: Current FOLL_LONGTERM behavior is incompatible with
         * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
         * vmas.  As there are no users of this flag in this call we simply
         * disallow this option for now.
         */
        if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
                return -EINVAL;

        /*
         * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
         * never directly by the caller, so enforce that:
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63d8d8b72c10..7e4d1e4bdee9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -478,66 +478,66 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 static inline unsigned long *get_pageblock_bitmap(struct page *page,
                                                        unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
        return section_to_usemap(__pfn_to_section(pfn));
 #else
        return page_zone(page)->pageblock_flags;
 #endif /* CONFIG_SPARSEMEM */
 }

 static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
        pfn &= (PAGES_PER_SECTION-1);
 #else
        pfn = pfn - round_down(page_zone(page)->zone_start_pfn, pageblock_nr_pages);
 #endif /* CONFIG_SPARSEMEM */
        return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 }

-/**
- * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
- * @page: The page within the block of interest
- * @pfn: The target page frame number
- * @mask: mask of bits that the caller is interested in
- *
- * Return: pageblock_bits flags
- */
 static __always_inline
 unsigned long __get_pfnblock_flags_mask(struct page *page,
                                        unsigned long pfn,
                                        unsigned long mask)
 {
        unsigned long *bitmap;
        unsigned long bitidx, word_bitidx;
        unsigned long word;

        bitmap = get_pageblock_bitmap(page, pfn);
        bitidx = pfn_to_bitidx(page, pfn);
        word_bitidx = bitidx / BITS_PER_LONG;
        bitidx &= (BITS_PER_LONG-1);

        word = bitmap[word_bitidx];
        return (word >> bitidx) & mask;
 }

+/**
+ * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ * @mask: mask of bits that the caller is interested in
+ *
+ * Return: pageblock_bits flags
+ */
 unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
                                        unsigned long mask)
 {
        return __get_pfnblock_flags_mask(page, pfn, mask);
 }

 static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
 {
        return __get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
 }

 /**
  * set_pfnblock_flags_mask - Set the requested group of flags for a pageblock_nr_pages block of pages
  * @page: The page within the block of interest
  * @flags: The flags to set
  * @pfn: The target page frame number
  * @mask: mask of bits that the caller is interested in
  */
 void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
                                        unsigned long pfn,
diff --git a/mm/truncate.c b/mm/truncate.c
index 960edf5803ca..604eaabc6d06 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -620,43 +620,49 @@ static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
  * @start: the offset 'from' which to invalidate
  * @end: the offset 'to' which to invalidate (inclusive)
  *
  * This function only removes the unlocked pages, if you want to
  * remove all the pages of one inode, you must call truncate_inode_pages.
  *
  * invalidate_mapping_pages() will not block on IO activity. It will not
  * invalidate pages which are dirty, locked, under writeback or mapped into
  * pagetables.
  *
  * Return: the number of the pages that were invalidated
  */
 unsigned long invalidate_mapping_pages(struct address_space *mapping,
                pgoff_t start, pgoff_t end)
 {
        return __invalidate_mapping_pages(mapping, start, end, NULL);
 }
 EXPORT_SYMBOL(invalidate_mapping_pages);

 /**
- * This helper is similar with the above one, except that it accounts for pages
- * that are likely on a pagevec and count them in @nr_pagevec, which will used by
+ * invalidate_mapping_pagevec - This helper is similar to
+ * invalidate_mapping_pages(), except that it accounts for pages that are
+ * likely on a pagevec and count them in @nr_pagevec, which will be used by
  * the caller.
+ *
+ * @mapping: the address_space which holds the pages to invalidate
+ * @start: the offset 'from' which to invalidate
+ * @end: the offset 'to' which to invalidate (inclusive)
+ *
  */
 void invalidate_mapping_pagevec(struct address_space *mapping,
                pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
 {
        __invalidate_mapping_pages(mapping, start, end, nr_pagevec);
 }

 /*
  * This is like invalidate_complete_page(), except it ignores the page's
  * refcount.  We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave pages behind because
  * shrink_page_list() has a temp ref on them, or because they're transiently
  * sitting in the lru_cache_add() pagevecs.
  */
 static int
 invalidate_complete_page2(struct address_space *mapping, struct page *page)
 {
        unsigned long flags;

        if (page->mapping != mapping)
-- 
2.28.0