LKML Archive on lore.kernel.org
From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: "裘稀石(稀石)" <xishi.qiuxishi@alibaba-inc.com>
Cc: linux-mm <linux-mm@kvack.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	陈义全 <zy.zhengyi@alibaba-inc.com>
Subject: Re: Re:[RFC] a question about reuse hwpoison page in soft_offline_page()
Date: Mon, 9 Jul 2018 00:38:02 +0000
Message-ID: <20180709003802.GA11404@hori1.linux.bs1.fc.nec.co.jp>
In-Reply-To: <ac6e703d-b0df-40d2-8918-63a63f3c5d68.xishi.qiuxishi@alibaba-inc.com>

On Fri, Jul 06, 2018 at 05:59:15PM +0800, 裘稀石(稀石) wrote:
> 
> Hi Naoya,
> 
> How about this case: we only trigger soft offline on the page, but someone
> gets killed later.
> Because of the race I described before, someone may use the hwpoisoned
> hugetlb page again.
> Please see the following.
> 
> soft offline: 
>     get_any_page - finds the hugetlb page is free
> process A:
>     do_page_fault - handle_mm_fault - hugetlb_fault - hugetlb_no_page - alloc_huge_page
> soft offline:
>     soft_offline_free_page - sets the hwpoison flag
> process B:
>     mmap the hugetlb file from A, hugetlb_fault - hugetlb_no_page - find_lock_page
>     finds the hwpoison flag is already set, so ret = VM_FAULT_HWPOISON
>     then mm_fault_error - do_sigbus - mce kill
> Process B was killed by soft offline, right?

Right, this and your other email show how things go bad.
The soft offline handler is simply racy now, so we had better cancel it,
by double-checking as I mentioned, if the target page was allocated
during soft offline handling.

Thanks,
Naoya Horiguchi


> 
> Thanks,
> Xishi Qiu 
> 
>     ------------------------------------------------------------------
>     From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
>     Sent: Friday, July 6, 2018 16:19
>     To: 裘稀石(稀石) <xishi.qiuxishi@alibaba-inc.com>
>     Cc: linux-mm <linux-mm@kvack.org>; linux-kernel
>     <linux-kernel@vger.kernel.org>; 陈义全 <zy.zhengyi@alibaba-inc.com>
>     Subject: Re: [RFC] a question about reuse hwpoison page in
>     soft_offline_page()
> 
>     On Fri, Jul 06, 2018 at 11:37:41AM +0800, 裘稀石(稀石) wrote:
>     > This patch add05cec ("mm: soft-offline: don't free target page in
>     > successful page migration") removes set_migratetype_isolate() and
>     > unset_migratetype_isolate() in soft_offline_page().
>     > 
>     > And this patch 243abd5b ("mm: hugetlb: prevent reuse of hwpoisoned
>     > free hugepages") changes "if (!is_migrate_isolate_page(page))" to
>     > "if (!PageHWPoison(page))", so it prevents someone from reusing the
>     > free hugetlb page again after the hwpoison flag is set in
>     > soft_offline_free_page().
>     > 
>     > My question is: if someone reuses the free hugetlb page again before
>     > soft_offline_free_page() and after get_any_page(), then it uses the
>     > hwpoison page, and this may trigger an mce kill later, right?
> 
>     Hi Xishi,
> 
>     Thank you for pointing out the issue. That's a nice catch.
> 
>     I think that the race condition itself could happen, but it doesn't
>     lead to an MCE kill, because PageHWPoison is not visible to the
>     hardware that triggers MCEs. The PageHWPoison flag is just a flag in
>     struct page, used to report a memory error from the kernel to
>     userspace. So even if a CPU is accessing a page whose struct page has
>     PageHWPoison set, that doesn't cause an MCE unless the page is
>     physically broken.
>     The type of memory error that soft offline tries to handle is a
>     corrected one, which is not a failure yet although the memory is
>     starting to wear.
>     So such a PageHWPoison page can be reused, but that's not critical,
>     because the page is freed at some point afterward and error
>     containment completes.
> 
>     However, I noticed that there's a small pain in the free hugetlb case.
>     We call dissolve_free_huge_page() in soft_offline_free_page(), which
>     moves the PageHWPoison flag from the head page to the raw error page.
>     If the reported race happens, dissolve_free_huge_page() just returns
>     without doing any dissolve work, because the
>     "if (PageHuge(page) && !page_count(page))" block is skipped.
>     The hugepage is allocated and used as usual, but the containment
>     doesn't complete as it does for a normal page, because
>     free_huge_pages() doesn't call dissolve_free_huge_page() for a
>     hwpoison hugepage. This is not critical, because such an error
>     hugepage just resides in the free hugepage list. But it might look
>     like a kind of memory leak. And even worse, when the hugepage pool is
>     shrunk and the hwpoison hugepage is freed, the PageHWPoison flag is
>     still on the head page, which is unlikely to be the actual error page.
> 
>     So I think we need improvement here, how about the fix like below?
> 
>       (not tested yet, sorry)
> 
>       diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>       --- a/mm/memory-failure.c
>       +++ b/mm/memory-failure.c
>       @@ -1883,6 +1883,11 @@ static void soft_offline_free_page(struct page *page)
>               struct page *head = compound_head(page);
>       
>               if (!TestSetPageHWPoison(head)) {
>       +               if (page_count(head)) {
>       +                       ClearPageHWPoison(head);
>       +                       return;
>       +               }
>       +
>                       num_poisoned_pages_inc();
>                       if (PageHuge(head))
>                               dissolve_free_huge_page(page);
> 
>     Thanks,
>     Naoya Horiguchi
> 

