From: yulei.kernel@gmail.com
X-Google-Original-From: yuleixzhang@tencent.com
To: linux-mm@kvack.org, akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, naoya.horiguchi@nec.com,
    viro@zeniv.linux.org.uk, pbonzini@redhat.com
Cc: joao.m.martins@oracle.com, rdunlap@infradead.org, sean.j.christopherson@intel.com,
    xiaoguangrong.eric@gmail.com, kernellwp@gmail.com, lihaiwei.kernel@gmail.com,
    Yulei Zhang, Xiao Guangrong
Subject: [RFC V2 05/37] dmemfs: support mmap for dmemfs
Date: Mon, 7 Dec 2020 19:30:58 +0800
Message-Id: <556903717e3d0b0fc0b9583b709f4b34be2154cb.1607332046.git.yuleixzhang@tencent.com>

From: Yulei Zhang

This patch adds mmap support. Note that the file is extended if mmap()'s
offset goes beyond the current file size, which drops the requirement
for a write() operation; shrinking the file is not supported yet.
Signed-off-by: Xiao Guangrong
Signed-off-by: Yulei Zhang
---
 fs/dmemfs/inode.c    | 343 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/dmem.h |  10 ++
 2 files changed, 351 insertions(+), 2 deletions(-)

diff --git a/fs/dmemfs/inode.c b/fs/dmemfs/inode.c
index 0aa3d3b..7b6e51d 100644
--- a/fs/dmemfs/inode.c
+++ b/fs/dmemfs/inode.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 MODULE_AUTHOR("Tencent Corporation");
 MODULE_LICENSE("GPL v2");
@@ -102,7 +103,255 @@ static int dmemfs_mkdir(struct inode *dir, struct dentry *dentry,
 	.getattr = simple_getattr,
 };
 
+static unsigned long dmem_pgoff_to_index(struct inode *inode, pgoff_t pgoff)
+{
+	struct super_block *sb = inode->i_sb;
+
+	return pgoff >> (sb->s_blocksize_bits - PAGE_SHIFT);
+}
+
+static void *dmem_addr_to_entry(struct inode *inode, phys_addr_t addr)
+{
+	struct super_block *sb = inode->i_sb;
+
+	addr >>= sb->s_blocksize_bits;
+	return xa_mk_value(addr);
+}
+
+static phys_addr_t dmem_entry_to_addr(struct inode *inode, void *entry)
+{
+	struct super_block *sb = inode->i_sb;
+
+	WARN_ON(!xa_is_value(entry));
+	return xa_to_value(entry) << sb->s_blocksize_bits;
+}
+
+static unsigned long
+dmem_addr_to_pfn(struct inode *inode, phys_addr_t addr, pgoff_t pgoff,
+		 unsigned int fault_shift)
+{
+	struct super_block *sb = inode->i_sb;
+	unsigned long pfn = addr >> PAGE_SHIFT;
+	unsigned long mask;
+
+	mask = (1UL << ((unsigned int)sb->s_blocksize_bits - fault_shift)) - 1;
+	mask <<= fault_shift - PAGE_SHIFT;
+
+	return pfn + (pgoff & mask);
+}
+
+static inline unsigned long dmem_page_size(struct inode *inode)
+{
+	return inode->i_sb->s_blocksize;
+}
+
+static int check_inode_size(struct inode *inode, loff_t offset)
+{
+	WARN_ON_ONCE(!rcu_read_lock_held());
+
+	if (offset >= i_size_read(inode))
+		return -EINVAL;
+
+	return 0;
+}
+
+static unsigned
+dmemfs_find_get_entries(struct address_space *mapping, unsigned long start,
+			unsigned int nr_entries, void **entries,
+			unsigned long *indices)
+{
+	XA_STATE(xas, &mapping->i_pages, start);
+
+	void *entry;
+	unsigned int ret = 0;
+
+	if (!nr_entries)
+		return 0;
+
+	rcu_read_lock();
+
+	xas_for_each(&xas, entry, ULONG_MAX) {
+		if (xas_retry(&xas, entry))
+			continue;
+
+		if (xa_is_value(entry))
+			goto export;
+
+		if (unlikely(entry != xas_reload(&xas)))
+			goto retry;
+
+export:
+		indices[ret] = xas.xa_index;
+		entries[ret] = entry;
+		if (++ret == nr_entries)
+			break;
+		continue;
+retry:
+		xas_reset(&xas);
+	}
+	rcu_read_unlock();
+	return ret;
+}
+
+static void *find_radix_entry_or_next(struct address_space *mapping,
+				      unsigned long start,
+				      unsigned long *eindex)
+{
+	void *entry = NULL;
+
+	dmemfs_find_get_entries(mapping, start, 1, &entry, eindex);
+	return entry;
+}
+
+/*
+ * find the entry in radix tree based on @index, create it if
+ * it does not exist
+ *
+ * return the entry with rcu locked, otherwise ERR_PTR()
+ * is returned
+ */
+static void *
+radix_get_create_entry(struct vm_area_struct *vma, unsigned long fault_addr,
+		       struct inode *inode, pgoff_t pgoff)
+{
+	struct address_space *mapping = inode->i_mapping;
+	unsigned long eindex, index;
+	loff_t offset;
+	phys_addr_t addr;
+	gfp_t gfp_masks = mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM;
+	void *entry;
+	unsigned int try_dpages, dpages;
+	int ret;
+
+retry:
+	offset = ((loff_t)pgoff << PAGE_SHIFT);
+	index = dmem_pgoff_to_index(inode, pgoff);
+	rcu_read_lock();
+	ret = check_inode_size(inode, offset);
+	if (ret) {
+		rcu_read_unlock();
+		return ERR_PTR(ret);
+	}
+
+	try_dpages = dmem_pgoff_to_index(inode, (i_size_read(inode) - offset)
+					 >> PAGE_SHIFT);
+	entry = find_radix_entry_or_next(mapping, index, &eindex);
+	if (entry) {
+		WARN_ON(!xa_is_value(entry));
+		if (eindex == index)
+			return entry;
+
+		WARN_ON(eindex <= index);
+		try_dpages = eindex - index;
+	}
+	rcu_read_unlock();
+
+	/* entry does not exist, create it */
+	addr = dmem_alloc_pages_vma(vma, fault_addr, try_dpages, &dpages);
+	if (!addr) {
+		/*
+		 * do not return -ENOMEM as that will trigger OOM,
+		 * it is useless for reclaiming dmem page
+		 */
+		ret = -EINVAL;
+		goto exit;
+	}
+
+	try_dpages = dpages;
+	while (dpages) {
+		rcu_read_lock();
+		ret = check_inode_size(inode, offset);
+		if (ret)
+			goto unlock_rcu;
+
+		entry = dmem_addr_to_entry(inode, addr);
+		entry = xa_store(&mapping->i_pages, index, entry, gfp_masks);
+		if (!xa_is_err(entry)) {
+			addr += inode->i_sb->s_blocksize;
+			offset += inode->i_sb->s_blocksize;
+			dpages--;
+			mapping->nrexceptional++;
+			index++;
+		}
+
+unlock_rcu:
+		rcu_read_unlock();
+		if (ret)
+			break;
+	}
+
+	if (dpages)
+		dmem_free_pages(addr, dpages);
+
+	/* we have created some entries, let's retry it */
+	if (ret == -EEXIST || try_dpages != dpages)
+		goto retry;
+exit:
+	return ERR_PTR(ret);
+}
+
+static void radix_put_entry(void)
+{
+	rcu_read_unlock();
+}
+
+static vm_fault_t dmemfs_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct inode *inode = file_inode(vma->vm_file);
+	phys_addr_t addr;
+	void *entry;
+	int ret;
+
+	if (vmf->pgoff > (MAX_LFS_FILESIZE >> PAGE_SHIFT))
+		return VM_FAULT_SIGBUS;
+
+	entry = radix_get_create_entry(vma, (unsigned long)vmf->address,
+				       inode, vmf->pgoff);
+	if (IS_ERR(entry)) {
+		ret = PTR_ERR(entry);
+		goto exit;
+	}
+
+	addr = dmem_entry_to_addr(inode, entry);
+	ret = vmf_insert_pfn(vma, (unsigned long)vmf->address,
+			     dmem_addr_to_pfn(inode, addr, vmf->pgoff,
+					      PAGE_SHIFT));
+	radix_put_entry();
+
+exit:
+	return ret;
+}
+
+static unsigned long dmemfs_pagesize(struct vm_area_struct *vma)
+{
+	return dmem_page_size(file_inode(vma->vm_file));
+}
+
+static const struct vm_operations_struct dmemfs_vm_ops = {
+	.fault = dmemfs_fault,
+	.pagesize = dmemfs_pagesize,
+};
+
+int dmemfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct inode *inode = file_inode(file);
+
+	if (vma->vm_pgoff & ((dmem_page_size(inode) - 1) >> PAGE_SHIFT))
+		return -EINVAL;
+
+	if (!(vma->vm_flags & VM_SHARED))
+		return -EINVAL;
+
+	vma->vm_flags |= VM_PFNMAP;
+
+	file_accessed(file);
+	vma->vm_ops = &dmemfs_vm_ops;
+	return 0;
+}
+
 static const struct file_operations dmemfs_file_operations = {
+	.mmap = dmemfs_file_mmap,
 };
 
 static int dmemfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
@@ -180,9 +429,86 @@ static int dmemfs_statfs(struct dentry *dentry, struct kstatfs *buf)
 	return 0;
 }
 
+/*
+ * should make sure the dmem page in the dropped region is not
+ * being mapped by any process
+ */
+static void inode_drop_dpages(struct inode *inode, loff_t start, loff_t end)
+{
+	struct address_space *mapping = inode->i_mapping;
+	struct pagevec pvec;
+	unsigned long istart, iend, indices[PAGEVEC_SIZE];
+	int i;
+
+	/* we never use normal page */
+	WARN_ON(mapping->nrpages);
+
+	/* if no dpage is allocated for the inode */
+	if (!mapping->nrexceptional)
+		return;
+
+	istart = dmem_pgoff_to_index(inode, start >> PAGE_SHIFT);
+	iend = dmem_pgoff_to_index(inode, end >> PAGE_SHIFT);
+	pagevec_init(&pvec);
+	while (istart < iend) {
+		pvec.nr = dmemfs_find_get_entries(mapping, istart,
+						  min(iend - istart,
+						      (unsigned long)PAGEVEC_SIZE),
+						  (void **)pvec.pages,
+						  indices);
+		if (!pvec.nr)
+			break;
+
+		for (i = 0; i < pagevec_count(&pvec); i++) {
+			phys_addr_t addr;
+
+			istart = indices[i];
+			if (istart >= iend)
+				break;
+
+			xa_erase(&mapping->i_pages, istart);
+			mapping->nrexceptional--;
+
+			addr = dmem_entry_to_addr(inode, pvec.pages[i]);
+			dmem_free_page(addr);
+		}
+
+		/*
+		 * only exception entries in pagevec, it's safe to
+		 * reinit it
+		 */
+		pagevec_reinit(&pvec);
+		cond_resched();
+		istart++;
+	}
+}
+
+static void dmemfs_evict_inode(struct inode *inode)
+{
+	/* no VMA works on it */
+	WARN_ON(!RB_EMPTY_ROOT(&inode->i_data.i_mmap.rb_root));
+
+	inode_drop_dpages(inode, 0, LLONG_MAX);
+	clear_inode(inode);
+}
+
+/*
+ * Display the mount options in /proc/mounts.
+ */
+static int dmemfs_show_options(struct seq_file *m, struct dentry *root)
+{
+	struct dmemfs_fs_info *fsi = root->d_sb->s_fs_info;
+
+	if (check_dpage_size(fsi->mount_opts.dpage_size))
+		seq_printf(m, ",pagesize=%lx", fsi->mount_opts.dpage_size);
+	return 0;
+}
+
 static const struct super_operations dmemfs_ops = {
 	.statfs = dmemfs_statfs,
+	.evict_inode = dmemfs_evict_inode,
 	.drop_inode = generic_delete_inode,
+	.show_options = dmemfs_show_options,
 };
 
 static int
@@ -190,6 +516,7 @@ static int dmemfs_statfs(struct dentry *dentry, struct kstatfs *buf)
 {
 	struct inode *inode;
 	struct dmemfs_fs_info *fsi = sb->s_fs_info;
+	int ret;
 
 	sb->s_maxbytes = MAX_LFS_FILESIZE;
 	sb->s_blocksize = fsi->mount_opts.dpage_size;
@@ -198,11 +525,17 @@ static int dmemfs_statfs(struct dentry *dentry, struct kstatfs *buf)
 	sb->s_op = &dmemfs_ops;
 	sb->s_time_gran = 1;
 
+	ret = dmem_alloc_init(sb->s_blocksize_bits);
+	if (ret)
+		return ret;
+
 	inode = dmemfs_get_inode(sb, NULL, S_IFDIR);
 	sb->s_root = d_make_root(inode);
-	if (!sb->s_root)
-		return -ENOMEM;
 
+	if (!sb->s_root) {
+		dmem_alloc_uinit();
+		return -ENOMEM;
+	}
 	return 0;
 }
 
@@ -238,7 +571,13 @@ int dmemfs_init_fs_context(struct fs_context *fc)
 
 static void dmemfs_kill_sb(struct super_block *sb)
 {
+	bool has_inode = !!sb->s_root;
+
 	kill_litter_super(sb);
+
+	/* do not uninit dmem allocator if mount failed */
+	if (has_inode)
+		dmem_alloc_uinit();
 }
 
 static struct file_system_type dmemfs_fs_type = {
diff --git a/include/linux/dmem.h b/include/linux/dmem.h
index 476a82e..8682d63 100644
--- a/include/linux/dmem.h
+++ b/include/linux/dmem.h
@@ -10,6 +10,16 @@
 int dmem_alloc_init(unsigned long dpage_shift);
 void dmem_alloc_uinit(void);
 
+phys_addr_t
+dmem_alloc_pages_nodemask(int nid, nodemask_t *nodemask, unsigned int try_max,
+			  unsigned int *result_nr);
+
+phys_addr_t
+dmem_alloc_pages_vma(struct vm_area_struct *vma, unsigned long addr,
+		     unsigned int try_max, unsigned int *result_nr);
+
+void dmem_free_pages(phys_addr_t addr, unsigned int dpages_nr);
+#define dmem_free_page(addr)	dmem_free_pages(addr, 1)
 #else
 static inline int dmem_reserve_init(void)
 {
-- 
1.8.3.1