linux-mm.kvack.org archive mirror
From: yong w <yongw.pur@gmail.com>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	 Matthew Wilcox <willy@infradead.org>,
	cgel.zte@gmail.com, kirill@shutemov.name,  songliubraving@fb.com,
	Linux MM <linux-mm@kvack.org>,
	 LKML <linux-kernel@vger.kernel.org>,
	yang.yang29@zte.com.cn,  wang.yong12@zte.com.cn
Subject: Re: [PATCH] memfd: fix F_SEAL_WRITE after shmem huge page allocated
Date: Wed, 2 Mar 2022 09:11:08 +0800	[thread overview]
Message-ID: <CAOH5QeA4w+CKW+K58k3c5pRWdkx6uT5ouyQ6hepMd1wKF51mcw@mail.gmail.com> (raw)
In-Reply-To: <a4f79248-df75-2c8c-3df-ba3317ccb5da@google.com>

Hello,
This patch does not apply to the 4.19 kernel.
Is it necessary to make corresponding patches for each stable version?

Thanks.

Hugh Dickins <hughd@google.com> wrote on Sun, 27 Feb 2022 at 14:41:
>
> Wangyong reports: after enabling tmpfs filesystem to support
> transparent hugepage with the following command:
>
>  echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled
>
> the docker program tries to add F_SEAL_WRITE through the following
> command, but it fails unexpectedly with errno EBUSY:
>
>  fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1.
>
> That is because memfd_tag_pins() and memfd_wait_for_pins() were never
> updated for shmem huge pages: checking page_mapcount() against
> page_count() is hopeless on THP subpages - they need to check
> total_mapcount() against page_count() on THP heads only.
>
> Make memfd_tag_pins() (compared > 1) as strict as memfd_wait_for_pins()
> (compared != 1): either can be justified, but given the non-atomic
> total_mapcount() calculation, it is better now to be strict.  Bear in
> mind that total_mapcount() itself scans all of the THP subpages, when
> choosing to take an XA_CHECK_SCHED latency break.
>
> Also fix the unlikely xa_is_value() case in memfd_wait_for_pins(): if a
> page has been swapped out since memfd_tag_pins(), then its refcount must
> have fallen, and so it can safely be untagged.
>
> Reported-by: Zeal Robot <zealci@zte.com.cn>
> Reported-by: wangyong <wang.yong12@zte.com.cn>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: <stable@vger.kernel.org>
> ---
> Andrew, please remove
> fix-shmem-huge-page-failed-to-set-f_seal_write-attribute-problem.patch
> fix-shmem-huge-page-failed-to-set-f_seal_write-attribute-problem-fix.patch
> from mmotm, and replace them by this patch against 5.17-rc5:
> wangyong's patch did not handle the case of pte-mapped huge pages, and I
> had this one from earlier, when I found the same issue with MFD_HUGEPAGE
> (but MFD_HUGEPAGE did not go in, so I didn't post this one, forgetting
> the transparent_hugepage/shmem_enabled case).
>
>  mm/memfd.c |   40 ++++++++++++++++++++++++++++------------
>  1 file changed, 28 insertions(+), 12 deletions(-)
>
> --- 5.17-rc5/mm/memfd.c
> +++ linux/mm/memfd.c
> @@ -31,20 +31,28 @@
>  static void memfd_tag_pins(struct xa_state *xas)
>  {
>         struct page *page;
> -       unsigned int tagged = 0;
> +       int latency = 0;
> +       int cache_count;
>
>         lru_add_drain();
>
>         xas_lock_irq(xas);
>         xas_for_each(xas, page, ULONG_MAX) {
> -               if (xa_is_value(page))
> -                       continue;
> -               page = find_subpage(page, xas->xa_index);
> -               if (page_count(page) - page_mapcount(page) > 1)
> +               cache_count = 1;
> +               if (!xa_is_value(page) &&
> +                   PageTransHuge(page) && !PageHuge(page))
> +                       cache_count = HPAGE_PMD_NR;
> +
> +               if (!xa_is_value(page) &&
> +                   page_count(page) - total_mapcount(page) != cache_count)
>                         xas_set_mark(xas, MEMFD_TAG_PINNED);
> +               if (cache_count != 1)
> +                       xas_set(xas, page->index + cache_count);
>
> -               if (++tagged % XA_CHECK_SCHED)
> +               latency += cache_count;
> +               if (latency < XA_CHECK_SCHED)
>                         continue;
> +               latency = 0;
>
>                 xas_pause(xas);
>                 xas_unlock_irq(xas);
> @@ -73,7 +81,8 @@ static int memfd_wait_for_pins(struct ad
>
>         error = 0;
>         for (scan = 0; scan <= LAST_SCAN; scan++) {
> -               unsigned int tagged = 0;
> +               int latency = 0;
> +               int cache_count;
>
>                 if (!xas_marked(&xas, MEMFD_TAG_PINNED))
>                         break;
> @@ -87,10 +96,14 @@ static int memfd_wait_for_pins(struct ad
>                 xas_lock_irq(&xas);
>                 xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) {
>                         bool clear = true;
> -                       if (xa_is_value(page))
> -                               continue;
> -                       page = find_subpage(page, xas.xa_index);
> -                       if (page_count(page) - page_mapcount(page) != 1) {
> +
> +                       cache_count = 1;
> +                       if (!xa_is_value(page) &&
> +                           PageTransHuge(page) && !PageHuge(page))
> +                               cache_count = HPAGE_PMD_NR;
> +
> +                       if (!xa_is_value(page) && cache_count !=
> +                           page_count(page) - total_mapcount(page)) {
>                                 /*
>                                  * On the last scan, we clean up all those tags
>                                  * we inserted; but make a note that we still
> @@ -103,8 +116,11 @@ static int memfd_wait_for_pins(struct ad
>                         }
>                         if (clear)
>                                 xas_clear_mark(&xas, MEMFD_TAG_PINNED);
> -                       if (++tagged % XA_CHECK_SCHED)
> +
> +                       latency += cache_count;
> +                       if (latency < XA_CHECK_SCHED)
>                                 continue;
> +                       latency = 0;
>
>                         xas_pause(&xas);
>                         xas_unlock_irq(&xas);
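[Editor's aside: the arithmetic behind the cache_count check in the patch above can be modelled in a few lines. The Python sketch below is an idealized illustration, not kernel code; it assumes HPAGE_PMD_NR = 512 (4K base pages with 2M PMD huge pages, as on x86_64) and simplified refcounts.]

```python
# Hypothetical model of the pin check from the patch above: the page
# cache itself holds HPAGE_PMD_NR references on a shmem THP (one per
# subpage index) but only 1 on a small page.  Any refcount beyond the
# cache's own references plus the mappings is treated as an extra pin.
HPAGE_PMD_NR = 512  # assumes 4K base pages, 2M PMD huge pages (x86_64)

def cache_count(is_thp: bool) -> int:
    """References the page cache itself holds on the page."""
    return HPAGE_PMD_NR if is_thp else 1

def has_extra_pins(page_count: int, total_mapcount: int, is_thp: bool) -> bool:
    """Mirrors: page_count(page) - total_mapcount(page) != cache_count."""
    return page_count - total_mapcount != cache_count(is_thp)

# A THP mapped once by PMD: 512 cache refs + 1 mapping ref = 513 total.
assert not has_extra_pins(513, 1, True)
# The same page with one extra pin (e.g. GUP) must block F_SEAL_WRITE.
assert has_extra_pins(514, 1, True)
```

This is why checking page_count() against page_mapcount() on a THP subpage was hopeless: the subpage never sees the HPAGE_PMD_NR cache references, which live on the head.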


Thread overview: 10+ messages
2022-02-15  7:37 [PATCH linux-next] Fix shmem huge page failed to set F_SEAL_WRITE attribute problem cgel.zte
2022-02-15 22:12 ` Andrew Morton
2022-02-16  6:57   ` CGEL
2022-02-17  1:00 ` Mike Kravetz
2022-02-17  1:25   ` Hugh Dickins
2022-02-17 13:43     ` Matthew Wilcox
2022-02-27  3:00       ` Hugh Dickins
2022-02-27  5:20         ` [PATCH] memfd: fix F_SEAL_WRITE after shmem huge page allocated Hugh Dickins
2022-03-02  1:11           ` yong w [this message]
2022-03-02 19:52             ` Hugh Dickins
