linux-mm.kvack.org archive mirror
From: Miaohe Lin <linmiaohe@huawei.com>
To: "David Hildenbrand" <david@redhat.com>,
	"HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>,
	"Oscar Salvador" <osalvador@suse.de>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Yang Shi <shy828301@gmail.com>,
	Muchun Song <songmuchun@bytedance.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload related to hugetlb and memory_hotplug
Date: Thu, 12 May 2022 19:13:16 +0800	[thread overview]
Message-ID: <04781d15-9d87-1763-02fe-e353679c50d7@huawei.com> (raw)
In-Reply-To: <c424e8a2-a771-e738-396c-24ac907b557f@redhat.com>

On 2022/5/12 15:28, David Hildenbrand wrote:
>>>>>
>>>>> Once the problematic DIMM actually gets unplugged, the memory block devices
>>>>> get removed as well. So when hotplugging a new DIMM in the same
>>>>> location, we could online that memory again.
>>>>
>>>> What about PG_hwpoison flags?  Are struct pages also freed and reallocated
>>>> in the actual DIMM replacement?
>>>
>>> Once memory is offline, the memmap is stale and is no longer
>>> trustworthy. It gets reinitialized during memory onlining -- so any
>>> previous PG_hwpoison is overridden at least there. In some setups, we
>>> even poison the whole memmap via page_init_poison() during memory offlining.
>>>
>>> Apart from that, we should be freeing the memmap in all relevant cases
>>> when removing memory. I remember there are a couple of corner cases, but
>>> we don't really have to care about that.
>>
>> OK, so there seems to be no need to manipulate struct pages for hwpoison in
>> all relevant cases.
> 
> Right. When offlining a memory block, all we have to do is remember if
> we stumbled over a hwpoisoned page and remember that inside the memory
> block. Rejecting to online is then easy.
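
If I read your suggestion right, roughly something like the sketch below?
(The hwpoison_seen field and the early return are made up for illustration;
memory_block_online() is the real online callback in drivers/base/memory.c,
but the body here is heavily simplified.)

/* Hypothetical sketch only, not existing kernel code. */
struct memory_block {
	/* ... existing fields omitted ... */
	bool hwpoison_seen;	/* set during offlining if a hwpoisoned page was found */
};

static int memory_block_online(struct memory_block *mem)
{
	/* refuse to (re-)online a block that previously held hwpoisoned memory */
	if (mem->hwpoison_seen)
		return -EHWPOISON;

	/* ... existing onlining path ... */
	return 0;
}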

BTW: how should we deal with the race window below:

CPU A			CPU B				CPU C
accessing page while holding page refcnt
			memory_failure happened on page
							offline_pages
							  page can be offlined due to page refcnt
							  is ignored when PG_hwpoison is set
can still access the struct page ...

If the above race happens, an in-use page (with its refcount incremented) might be offlined while its
struct page contents, e.g. flags, private, ..., are still being accessed. Is this possible? Or am I
missing something? Any suggestions on how to fix it? I can't figure out a way yet. :(
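
For reference, the behaviour I mean by "page refcnt is ignored" is the
PageHWPoison special case in do_migrate_range(), roughly paraphrased below
(not verbatim kernel code); test_pages_isolated() with MEMORY_OFFLINE
likewise treats hwpoisoned pages as already isolated:

static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;
	struct page *page;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_valid(pfn))
			continue;
		page = pfn_to_page(pfn);

		/*
		 * Hwpoisoned pages are unmapped and then simply skipped:
		 * any refcount still held by a concurrent user (CPU A
		 * above) does not stop the offline from proceeding.
		 */
		if (PageHWPoison(page)) {
			if (page_mapped(page))
				try_to_unmap(page_folio(page), TTU_IGNORE_MLOCK);
			continue;
		}

		/* ... isolate and migrate everything else ... */
	}
}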

Thanks a lot!


Thread overview: 34+ messages
2022-04-27  4:28 [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload related to hugetlb and memory_hotplug Naoya Horiguchi
2022-04-27  4:28 ` [RFC PATCH v1 1/4] mm, hwpoison, hugetlb: introduce SUBPAGE_INDEX_HWPOISON to save raw error page Naoya Horiguchi
2022-04-27  7:11   ` Miaohe Lin
2022-04-27 13:03     ` HORIGUCHI NAOYA(堀口 直也)
2022-04-28  3:14       ` Miaohe Lin
2022-05-12 22:31   ` Jane Chu
2022-05-12 22:49     ` HORIGUCHI NAOYA(堀口 直也)
2022-04-27  4:28 ` [RFC PATCH v1 2/4] mm,hwpoison,hugetlb,memory_hotplug: hotremove memory section with hwpoisoned hugepage Naoya Horiguchi
2022-04-29  8:49   ` Miaohe Lin
2022-05-09  7:55     ` HORIGUCHI NAOYA(堀口 直也)
2022-05-09  8:57       ` Miaohe Lin
2022-04-27  4:28 ` [RFC PATCH v1 3/4] mm, hwpoison: add parameter unpoison to get_hwpoison_huge_page() Naoya Horiguchi
2022-04-27  4:28 ` [RFC PATCH v1 4/4] mm, memory_hotplug: fix inconsistent num_poisoned_pages on memory hotremove Naoya Horiguchi
2022-04-28  3:20   ` Miaohe Lin
2022-04-28  4:05     ` HORIGUCHI NAOYA(堀口 直也)
2022-04-28  7:16       ` Miaohe Lin
2022-05-09 13:34         ` Naoya Horiguchi
2022-04-27 10:48 ` [RFC PATCH v1 0/4] mm, hwpoison: improve handling workload related to hugetlb and memory_hotplug David Hildenbrand
2022-04-27 12:20   ` Oscar Salvador
2022-04-27 12:20   ` HORIGUCHI NAOYA(堀口 直也)
2022-04-28  8:44     ` David Hildenbrand
2022-05-09  7:29       ` HORIGUCHI NAOYA(堀口 直也)
2022-05-09  9:04         ` Miaohe Lin
2022-05-09  9:58           ` Oscar Salvador
2022-05-09 10:53             ` Miaohe Lin
2022-05-11 15:11               ` David Hildenbrand
2022-05-11 16:10                 ` HORIGUCHI NAOYA(堀口 直也)
2022-05-11 16:22                   ` David Hildenbrand
2022-05-12  3:04                     ` Miaohe Lin
2022-05-12  6:35                     ` HORIGUCHI NAOYA(堀口 直也)
2022-05-12  7:28                       ` David Hildenbrand
2022-05-12 11:13                         ` Miaohe Lin [this message]
2022-05-12 12:59                           ` David Hildenbrand
2022-05-16  3:25                             ` Miaohe Lin
