From: Anthony Yznaga <anthony.yznaga@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: willy@infradead.org, corbet@lwn.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, rppt@kernel.org, akpm@linux-foundation.org, hughd@google.com, ebiederm@xmission.com, keescook@chromium.org, ardb@kernel.org, nivedita@alum.mit.edu, jroedel@suse.de, masahiroy@kernel.org, nathan@kernel.org, terrelln@fb.com, vincenzo.frascino@arm.com, martin.b.radev@gmail.com, andreyknvl@google.com, daniel.kiper@oracle.com, rafael.j.wysocki@intel.com, dan.j.williams@intel.com, Jonathan.Cameron@huawei.com, bhe@redhat.com, rminnich@gmail.com, ashish.kalra@amd.com, guro@fb.com, hannes@cmpxchg.org, mhocko@kernel.org, iamjoonsoo.kim@lge.com, vbabka@suse.cz, alex.shi@linux.alibaba.com, david@redhat.com, richard.weiyang@gmail.com, vdavydov.dev@gmail.com, graf@amazon.com, jason.zeng@intel.com, lei.l.li@intel.com, daniel.m.jordan@oracle.com, steven.sistare@oracle.com, linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org, kexec@lists.infradead.org
Subject: [RFC v2 23/43] mm: shmem: introduce shmem_insert_page
Date: Tue, 30 Mar 2021 14:35:58 -0700
Message-ID: <1617140178-8773-24-git-send-email-anthony.yznaga@oracle.com>
In-Reply-To: <1617140178-8773-1-git-send-email-anthony.yznaga@oracle.com>

The function inserts a page into a shmem file at a specified offset.
The page can be a regular PAGE_SIZE page or a transparent huge page.
If there is already something at the offset (a page or a swap entry),
the function fails.

The function will be used by the next patch.
Originally-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
---
 include/linux/shmem_fs.h |  3 ++
 mm/shmem.c               | 77 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d82b6f396588..3f0dd95efd46 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -103,6 +103,9 @@ enum sgp_type {
 extern int shmem_getpage(struct inode *inode, pgoff_t index,
 		struct page **pagep, enum sgp_type sgp);
 
+extern int shmem_insert_page(struct mm_struct *mm, struct inode *inode,
+			     pgoff_t index, struct page *page);
+
 static inline struct page *shmem_read_mapping_page(
 				struct address_space *mapping, pgoff_t index)
 {
diff --git a/mm/shmem.c b/mm/shmem.c
index b2db4ed0fbc7..60e4f0ad23b9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -755,6 +755,83 @@ static void shmem_delete_from_page_cache(struct page *page, void *radswap)
 	BUG_ON(error);
 }
 
+int shmem_insert_page(struct mm_struct *mm, struct inode *inode, pgoff_t index,
+		      struct page *page)
+{
+	struct address_space *mapping = inode->i_mapping;
+	struct shmem_inode_info *info = SHMEM_I(inode);
+	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
+	gfp_t gfp = mapping_gfp_mask(mapping);
+	int err;
+	int nr;
+	pgoff_t hindex = index;
+	bool on_lru = PageLRU(page);
+
+	if (index > (MAX_LFS_FILESIZE >> PAGE_SHIFT))
+		return -EFBIG;
+
+	nr = thp_nr_pages(page);
+retry:
+	err = 0;
+	if (!shmem_inode_acct_block(inode, nr))
+		err = -ENOSPC;
+	if (err) {
+		int retry = 5;
+
+		/*
+		 * Try to reclaim some space by splitting a huge page
+		 * beyond i_size on the filesystem.
+		 */
+		while (retry--) {
+			int ret;
+
+			ret = shmem_unused_huge_shrink(sbinfo, NULL, 1);
+			if (ret == SHRINK_STOP)
+				break;
+			if (ret)
+				goto retry;
+		}
+		goto failed;
+	}
+
+	if (!on_lru) {
+		__SetPageLocked(page);
+		__SetPageSwapBacked(page);
+	} else {
+		lock_page(page);
+	}
+
+	hindex = round_down(index, nr);
+	__SetPageReferenced(page);
+
+	err = shmem_add_to_page_cache(page, mapping, hindex,
+				      NULL, gfp & GFP_RECLAIM_MASK, mm);
+	if (err)
+		goto out_unlock;
+
+	if (!on_lru)
+		lru_cache_add(page);
+
+	spin_lock(&info->lock);
+	info->alloced += nr;
+	inode->i_blocks += BLOCKS_PER_PAGE << thp_order(page);
+	shmem_recalc_inode(inode);
+	spin_unlock(&info->lock);
+
+	flush_dcache_page(page);
+	SetPageUptodate(page);
+	set_page_dirty(page);
+
+	unlock_page(page);
+	return 0;
+
+out_unlock:
+	unlock_page(page);
+	shmem_inode_unacct_blocks(inode, nr);
+failed:
+	return err;
+}
+
 /*
  * Remove swap entry from page cache, free the swap and its page cache.
  */
-- 
1.8.3.1