From: Joao Martins <joao.m.martins@oracle.com>
To: yulei zhang <yulei.kernel@gmail.com>
Cc: linux-fsdevel@vger.kernel.org, kvm <kvm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Haiwei Li <lihaiwei.kernel@gmail.com>,
	Yulei Zhang <yuleixzhang@tencent.com>,
	akpm@linux-foundation.org, naoya.horiguchi@nec.com,
	viro@zeniv.linux.org.uk, Paolo Bonzini <pbonzini@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Jane Y Chu <jane.chu@oracle.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH 00/35] Enhance memory utilization with DMEMFS
Date: Fri, 9 Oct 2020 12:53:08 +0100
Message-ID: <b963565b-61d8-89d3-1abd-50cd8c8daad5@oracle.com>
In-Reply-To: <CACZOiM2qKhogXQ_DXzWjGM5UCeCuEqT6wnR=f2Wi_T45_uoYHQ@mail.gmail.com>

On 10/9/20 12:39 PM, yulei zhang wrote:
> Joao, thanks a lot for the feedback. One more thing worth mentioning
> is that dmemfs also supports fine-grained
> memory management, which makes it more flexible for tenants with
> different requirements.
> 
So does device DAX, now that it allows partitioning a region (starting with 5.10). Meaning you
have a region which you dedicate to userspace. That region can then be partitioned into devices
which give you access to multiple (possibly discontiguous) extents at a given page
granularity (selectable when you create the device), accessed through mmap().
You can then give that device to a cgroup. Or you can return that memory back to the
kernel (should you run into an OOM situation), or recreate the same mappings across
reboot/kexec.
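
Roughly, consuming such a device from userspace looks like the sketch below (assuming a
device-dax node at /dev/dax0.0 created with 2M alignment; the path and size are illustrative,
not something either series mandates):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 2UL << 20;         /* one 2M extent, matching the device alignment */
            int fd = open("/dev/dax0.0", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* device-dax mappings must be MAP_SHARED and aligned to the device page size */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            memset(p, 0, len);              /* touch the memory directly, no page cache involved */

            munmap(p, len);
            close(fd);
            return 0;
    }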

I probably need to read your patches again, but can you expand on what you mean by 'dmemfs also
supports fine-grained memory management', so I can understand the gap you are referring to?

> On Fri, Oct 9, 2020 at 3:01 AM Joao Martins <joao.m.martins@oracle.com> wrote:
>>
>> [adding a couple folks that directly or indirectly work on the subject]
>>
>> On 10/8/20 8:53 AM, yulei.kernel@gmail.com wrote:
>>> From: Yulei Zhang <yuleixzhang@tencent.com>
>>>
>>> In the current system each physical memory page is associated with
>>> a page structure which is used to track the usage of that page.
>>> But with memory usage growing rapidly in cloud environments,
>>> we find that the resources consumed by page structure storage
>>> become significant. So is it an expense that we could spare?
>>>
>> Happy to see another person working to solve the same problem!
>>
>> I am really glad to see more folks interested in solving
>> this problem, and I hope we can join efforts?
>>
>> BTW, there is also a second benefit in removing struct page -
>> which is carving out memory from the direct map.
>>
>>> This patchset introduces an idea for how to save that extra
>>> memory through a new virtual filesystem -- dmemfs.
>>>
>>> Dmemfs (Direct Memory filesystem) is a filesystem backed by device
>>> memory or reserved memory. This kind of memory is special as it is
>>> not managed by the kernel and, most importantly, it has no 'struct page'.
>>> Therefore we can leverage the extra memory on the host system
>>> to support more tenants in our cloud service.
>>>
>> This is like a walk down memory lane.
>>
>> About a year ago we followed the exact same idea/motivation to
>> have memory outside of the direct map (and remove the struct page overhead)
>> and started with our own layer/thingie. However, we realized that DAX
>> is one of the subsystems which already gives you direct access to memory
>> for free (and is already upstream), plus a couple of things which we
>> found more handy.
>>
>> So we sent an RFC a couple of months ago:
>>
>> https://lore.kernel.org/linux-mm/20200110190313.17144-1-joao.m.martins@oracle.com/
>>
>> Since then the majority of the work has been in improving DAX [1].
>> But now that it is done, I am going to follow up with the above patchset.
>>
>> [1]
>> https://lore.kernel.org/linux-mm/159625229779.3040297.11363509688097221416.stgit@dwillia2-desk3.amr.corp.intel.com/
>>
>> (Give me a couple of days and I will send you the link to the latest
>> patches on a git-tree - would love feedback!)
>>
>> The struct page removal for DAX would then be small, and it ticks the
>> same boxes (MCE handling, reserving PAT memtypes, ptrace
>> support) that we both do, with a smaller diffstat, and it doesn't
>> touch KVM (at least not fundamentally).
>>
>>         15 files changed, 401 insertions(+), 38 deletions(-)
>>
>> The main thing needed in core-mm is handling PMD/PUD PAGE_SPECIAL, much
>> like we both do. Furthermore, there wouldn't be a need for a new vm type,
>> for consuming an extra page bit (in addition to PAGE_SPECIAL), or for a new filesystem.
>>
>>> We use a kernel boot parameter, 'dmem=', to reserve system
>>> memory when the host system boots up; the details can be found
>>> in Documentation/admin-guide/kernel-parameters.txt.
>>>
>>> Theoretically, for each 4K physical page we can save 64 bytes by
>>> dropping the 'struct page', so for 320G of guest memory that saves
>>> about 5G of physical memory in total.
>>>
>> Also worth mentioning that if you only care about the 'struct page' cost, and not about the
>> security boundary, there is also some work on hugetlbfs preallocation of hugepages that
>> tricks the vmemmap into reusing tail pages.
>>
>>   https://lore.kernel.org/linux-mm/20200915125947.26204-1-songmuchun@bytedance.com/
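>>
>> To put rough numbers on the 'struct page' cost being discussed (a back-of-the-envelope
>> sketch, assuming the usual 64-byte struct page and 4K base pages; the helper below is
>> purely illustrative and not code from either series):
>>
>> 	#include <stdio.h>
>>
>> 	/* bytes of struct page metadata needed to describe mem_bytes of RAM,
>> 	 * assuming 64-byte struct pages and 4K base pages */
>> 	static unsigned long long struct_page_overhead(unsigned long long mem_bytes)
>> 	{
>> 		return mem_bytes / 4096 * 64;
>> 	}
>>
>> 	int main(void)
>> 	{
>> 		/* 320G of guest memory -> (320G / 4K) * 64 = 5G of struct pages (~1.6%) */
>> 		printf("320G costs %lluM of struct pages\n",
>> 		       struct_page_overhead(320ULL << 30) >> 20);
>>
>> 		/* a 2M hugepage still carries 512 * 64 = 32K (eight 4K pages) of vmemmap,
>> 		 * which is what the series above shrinks by reusing the tail pages */
>> 		printf("one 2M hugepage carries %lluK of vmemmap\n",
>> 		       struct_page_overhead(2ULL << 20) >> 10);
>> 		return 0;
>> 	}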
>>
>> Going forward that could also make sense for device-dax, to avoid allocating so many
>> struct pages (which would require its transition to compound
>> struct pages, like hugetlbfs, which we are looking at too). In addition, an
>> idea <handwaving> would perhaps be to have a stricter mode in DAX where
>> we initialize/use the metadata ('struct page') but remove the underlying
>> PFNs (of the 'struct page') from the direct map, at the cost of
>> mapping/unmapping on gup/pup.
>>
>>         Joao
