From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>,
Rongwei Wang <rongwei.wang@linux.alibaba.com>
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org,
"xuyu@linux.alibaba.com" <xuyu@linux.alibaba.com>
Subject: Re: [PATCH RFC v2 0/4] Add support for sharing page tables across processes (Previously mshare)
Date: Mon, 31 Jul 2023 14:50:59 +0200 [thread overview]
Message-ID: <ae3bbfba-4207-ec5b-b4dd-ea63cb52883d@redhat.com> (raw)
In-Reply-To: <ZMeoHoM8j/ric0Bh@casper.infradead.org>
On 31.07.23 14:25, Matthew Wilcox wrote:
> On Mon, Jul 31, 2023 at 12:35:00PM +0800, Rongwei Wang wrote:
>> Hi Matthew
>>
>> May I ask you another question about mshare under this RFC? I remember
>> you said you would redesign mshare to be per-VMA rather than per-mapping
>> (apologies if I remember wrongly) in the last MM alignment session. I
>> also referred to that plan when re-coding this part in our internal
>> version (based on this RFC). It seems that per-VMA sharing can simplify
>> the structure of pgtable sharing, and it doesn't even have to care about
>> the different permissions of file mappings. Those are the advantages
>> (maybe) that I can imagine, but IMHO they don't seem like a strong
>> reason to switch from per-mapping to per-VMA.
>>
>> And I can't imagine what other upstream considerations there might be.
>> Can you share the reason for redesigning it in a per-VMA way? Is it for
>> integration with hugetlbfs pgtable sharing or anonymous page sharing?
>
> It was David who wants to make page table sharing be per-VMA. I think
> he is advocating for the wrong approach. In any case, I don't have time
> to work on mshare and Khalid is on leave until September, so I don't
> think anybody is actively working on mshare.
Not that I have any time to look into this either, but my comment
essentially was that we should try decoupling page table sharing (reduced
memory consumption, shorter rmap walks) from the mprotect(PROT_READ) use
case.
For page table sharing I was wondering whether there could be ways to
just have that done semi-automatically. Similar to how it's done for
hugetlb. There are some clear limitations: mappings < PMD_SIZE won't be
able to benefit.
It's still unclear whether that is a real limitation. Some use cases
were raised (put all user space library mappings into a shared area),
but I realized that these conflict with MAP_PRIVATE requirements of such
areas. Maybe I'm wrong and this is easily resolved.
At least it's not the primary use case that was raised. For the primary
use cases (VMs, databases) that map huge areas, it might not be a
limitation.
Regarding mprotect(PROT_READ), my point was that mprotect() is most
probably the wrong tool to use (especially, due to signal handling).
Instead, I was suggesting having a way to essentially protect pages in a
shmem file -- and get notified whenever anyone wants to write to such a
page, either via the page tables or via write() and friends. We do have
the write-notify infrastructure for filesystems in place that we might
extend/reuse. That mechanism could benefit from shared page tables by
having to do fewer rmap walks.
Again, I don't have time to look into that (just like everybody else as
it appears) and might miss something important. Just sharing my thoughts
that I raised in the call.
--
Cheers,
David / dhildenb
Thread overview: 25+ messages
2023-04-26 16:49 [PATCH RFC v2 0/4] Add support for sharing page tables across processes (Previously mshare) Khalid Aziz
2023-04-26 16:49 ` [PATCH RFC v2 1/4] mm/ptshare: Add vm flag for shared PTE Khalid Aziz
2023-04-26 16:49 ` [PATCH RFC v2 2/4] mm/ptshare: Add flag MAP_SHARED_PT to mmap() Khalid Aziz
2023-04-27 11:17 ` kernel test robot
2023-04-29 4:41 ` kernel test robot
2023-04-26 16:49 ` [PATCH RFC v2 3/4] mm/ptshare: Create new mm struct for page table sharing Khalid Aziz
2023-06-26 8:08 ` Karim Manaouil
2023-04-26 16:49 ` [PATCH RFC v2 4/4] mm/ptshare: Add page fault handling for page table shared regions Khalid Aziz
2023-04-27 0:24 ` kernel test robot
2023-04-29 14:07 ` kernel test robot
2023-04-26 21:27 ` [PATCH RFC v2 0/4] Add support for sharing page tables across processes (Previously mshare) Mike Kravetz
2023-04-27 16:40 ` Khalid Aziz
2023-06-12 16:25 ` Peter Xu
2023-06-30 11:29 ` Rongwei Wang
2023-07-31 4:35 ` Rongwei Wang
2023-07-31 12:25 ` Matthew Wilcox
2023-07-31 12:50 ` David Hildenbrand [this message]
2023-07-31 16:19 ` Rongwei Wang
2023-07-31 16:30 ` David Hildenbrand
2023-07-31 16:38 ` Matthew Wilcox
2023-07-31 16:48 ` David Hildenbrand
2023-07-31 16:54 ` Matthew Wilcox
2023-07-31 17:06 ` David Hildenbrand
2023-08-01 6:53 ` Rongwei Wang
2023-08-01 19:28 ` Matthew Wilcox