linux-kernel.vger.kernel.org archive mirror
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Yang Shi <yang.shi@linux.alibaba.com>
Cc: mhocko@kernel.org, willy@infradead.org,
	ldufour@linux.vnet.ibm.com, akpm@linux-foundation.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC v4 0/3] mm: zap pages with read mmap_sem in munmap for large mapping
Date: Wed, 11 Jul 2018 14:10:52 +0300	[thread overview]
Message-ID: <20180711111052.hbyukcwetmjjpij2@kshutemo-mobl1> (raw)
In-Reply-To: <1531265649-93433-1-git-send-email-yang.shi@linux.alibaba.com>

On Wed, Jul 11, 2018 at 07:34:06AM +0800, Yang Shi wrote:
> 
> Background:
> Recently, when we ran some vm scalability tests on machines with large
> memory, we hit a couple of mmap_sem scalability issues when unmapping
> large memory areas; see https://lkml.org/lkml/2017/12/14/733 and
> https://lkml.org/lkml/2018/2/20/576.
> 
> 
> History:
> akpm then suggested unmapping a large mapping section by section,
> dropping mmap_sem between sections, to mitigate the problem (see
> https://lkml.org/lkml/2018/3/6/784).
> 
> The v1 patch series was submitted to the mailing list per Andrew's
> suggestion (see https://lkml.org/lkml/2018/3/20/786), and I received a
> lot of great feedback and suggestions.
> 
> This topic was then discussed at the LSF/MM summit 2018, where Michal
> Hocko suggested (as he also did in the v1 review) trying a "two phases"
> approach: zap pages with read mmap_sem held, then do the cleanup with
> write mmap_sem (for discussion details, see
> https://lwn.net/Articles/753269/).
> 
> 
> Approach:
> Zapping pages is the most time-consuming part. Per the suggestion from
> Michal Hocko [1], it can be done while holding read mmap_sem, like
> MADV_DONTNEED does. Then write mmap_sem is re-acquired to clean up vmas.
> 
> But we can't just call MADV_DONTNEED, since it has two major drawbacks:
>   * A page fault (PF) that wins the race in the middle of munmap sees an
>     unexpected state: it may return a zero page instead of the original
>     content or SIGSEGV.
>   * It can't handle VM_LOCKED | VM_HUGETLB | VM_PFNMAP or uprobe
>     mappings, which akpm considered a showstopper.
> 
> Also, some parts still need write mmap_sem, for example vma splitting.
> So the design is as follows:
>         acquire write mmap_sem
>         lookup vmas (find and split vmas)
>         set VM_DEAD flags
>         deal with special mappings
>         downgrade_write
> 
>         zap pages
>         release mmap_sem
> 
>         retake mmap_sem exclusively
>         cleanup vmas
>         release mmap_sem
> 
> The large-mapping threshold is defined as PUD size: pages are zapped
> with read mmap_sem only for mappings which are >= PUD_SIZE. Unmapping an
> area smaller than PUD_SIZE still takes the regular path.
> 
> All vmas which will be zapped soon have the VM_DEAD flag set. A PF
> racing with munmap would, before this optimization, return either the
> right content or SIGSEGV; with the optimization it could instead return
> a zero page. The flag marks the area as unstable so that a PF to it
> triggers SIGSEGV, preventing that unexpected third state.
> 
> Vmas with VM_LOCKED | VM_HUGETLB | VM_PFNMAP, or with uprobes, are
> considered special mappings. They are dealt with before zapping pages,
> with write mmap_sem held; basically, this just updates vm_flags. The
> actual unmapping is still done with read mmap_sem.
> 
> And since these vmas are also manipulated by unmap_single_vma(), which
> in this case is called from zap_page_range() with read mmap_sem held, we
> must not update vm_flags inside the read critical section. To keep the
> code simple, unmap_single_vma() just checks VM_DEAD and skips any
> VM_DEAD area, since those have already been handled.
> 
> When cleaning up vmas, just call do_munmap() without carrying over the
> vmas found above, to avoid a race: the address space might already have
> changed under our feet after retaking the exclusive lock.
> 
> For the time being, this is done only in the munmap syscall path. Other
> vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain
> intact for stability reasons. And, per akpm's suggestion, this is
> explicitly 64-bit only.

I still see VM_DEAD as unnecessary complication. We should be fine without it.
But looks like I'm in the minority :/

It's okay. I have another suggestion that doesn't require the VM_DEAD
trick either :)

1. Take mmap_sem for write;
2. Adjust VMA layout (split/remove). After the step all memory we try to
   unmap is outside any VMA.
3. Downgrade mmap_sem to read.
4. Zap the page range.
5. Drop mmap_sem.

I believe it should be safe.

The pages in the range cannot be re-faulted after step 3 as find_vma()
will not see the corresponding VMA and deliver SIGSEGV.

New VMAs cannot be created in the range before step 5 since we hold the
semaphore at least for read the whole time.

Do you see a problem with this approach?

-- 
 Kirill A. Shutemov


Thread overview: 16+ messages
2018-07-10 23:34 [RFC v4 0/3] mm: zap pages with read mmap_sem in munmap for large mapping Yang Shi
2018-07-10 23:34 ` [RFC v4 PATCH 1/3] mm: introduce VM_DEAD flag and extend check_stable_address_space to check it Yang Shi
2018-07-10 23:34 ` [RFC v4 PATCH 2/3] mm: refactor do_munmap() to extract the common part Yang Shi
2018-07-10 23:34 ` [RFC v4 PATCH 3/3] mm: mmap: zap pages with read mmap_sem for large mapping Yang Shi
2018-07-11 10:33 ` [RFC v4 0/3] mm: zap pages with read mmap_sem in munmap " Michal Hocko
2018-07-11 11:13   ` Kirill A. Shutemov
2018-07-11 11:53     ` Michal Hocko
2018-07-11 17:08       ` Yang Shi
2018-07-11 16:57   ` Yang Shi
2018-07-11 22:49   ` Andrew Morton
2018-07-12  8:15     ` Michal Hocko
2018-07-11 11:10 ` Kirill A. Shutemov [this message]
2018-07-11 11:58   ` Michal Hocko
2018-07-11 17:04   ` Yang Shi
2018-07-12  8:04     ` Michal Hocko
2018-07-12 23:45       ` Yang Shi
