From: Laurent Dufour <ldufour@linux.ibm.com>
To: Michal Hocko <mhocko@suse.com>
Cc: akpm@linux-foundation.org, David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	rafael@kernel.org, nathanl@linux.ibm.com, cheloha@linux.ibm.com,
	stable@vger.kernel.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: don't rely on system state to detect hot-plug operations
Date: Wed, 9 Sep 2020 18:07:15 +0200
Message-ID: <74a62b00-235e-7deb-2814-f3b240fea25e@linux.ibm.com>
In-Reply-To: <20200909105914.GF7348@dhcp22.suse.cz>

On 09/09/2020 at 12:59, Michal Hocko wrote:
> On Wed 09-09-20 11:21:58, Laurent Dufour wrote:
>> On 09/09/2020 at 11:09, Michal Hocko wrote:
>>> On Wed 09-09-20 09:48:59, Laurent Dufour wrote:
>>>> On 09/09/2020 at 09:40, Michal Hocko wrote:
> [...]
>>>>>> In
>>>>>> that case, the system is able to boot but later hot-plug operation may lead
>>>>>> to this panic because the node's links are correctly broken:
>>>>>
>>>>> Correctly broken? Could you provide more details on the inconsistency
>>>>> please?
>>>>
>>>> laurent@ltczep3-lp4:~$ ls -l /sys/devices/system/memory/memory21
>>>> total 0
>>>> lrwxrwxrwx 1 root root     0 Aug 24 05:27 node1 -> ../../node/node1
>>>> lrwxrwxrwx 1 root root     0 Aug 24 05:27 node2 -> ../../node/node2
>>>> -rw-r--r-- 1 root root 65536 Aug 24 05:27 online
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_device
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 phys_index
>>>> drwxr-xr-x 2 root root     0 Aug 24 05:27 power
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 removable
>>>> -rw-r--r-- 1 root root 65536 Aug 24 05:27 state
>>>> lrwxrwxrwx 1 root root     0 Aug 24 05:25 subsystem -> ../../../../bus/memory
>>>> -rw-r--r-- 1 root root 65536 Aug 24 05:25 uevent
>>>> -r--r--r-- 1 root root 65536 Aug 24 05:27 valid_zones
>>>
>>> OK, so there are two nodes referenced here. Not terrible from the user
>>> point of view. Such a memory block will refuse to offline or online
>>> IIRC.
>>
>> No, the memory block is still owned by one node, only the sysfs
>> representation is wrong. So the memory block can be hot unplugged, but only
>> one node's link will be cleaned, and a '/sys/devices/system/node#/memory21'
>> link will remain and that will be detected later when that memory block is
>> hot plugged again.
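
To illustrate why only one node's link gets cleaned: the removal path only
considers the node the block is accounted to, roughly like this (a simplified
sketch from memory, not the verbatim code):

void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
{
    /* Only the owning node (mem_blk->nid) is looked at here. */
    if (mem_blk->nid == NUMA_NO_NODE)
        return;

    sysfs_remove_link(&node_devices[mem_blk->nid]->dev.kobj,
              kobject_name(&mem_blk->dev.kobj));
    sysfs_remove_link(&mem_blk->dev.kobj,
              kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
}

So the stray link created under the other node at boot time is never removed,
and it is tripped over on the next hot-add.
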
> 
> OK, so you need to hotremove first and hotadd again to trigger the
> problem. It is not like you would be hot adding something new. This is
> useful information to have in the changelog.
> 
>>>>> Which physical memory range you are trying to add here and what is the
>>>>> node affinity?
>>>>
>>>> None is added; the root cause of the issue lies at boot time.
>>>
>>> Let me clarify my question. The crash has clearly happened during the
>>> hotplug add_memory_resource - which is clearly not a boot time path.
>>> I was asking for more information about why this has failed. It is quite
>>> clear that the sysfs machinery has failed and that led to the BUG_ON, but
>>> we are missing information on why. What was the physical memory range to
>>> be added, and why did sysfs fail?
>>
>> The BUG_ON is detecting a bad state generated earlier, at boot time because
>> register_mem_sect_under_node() didn't check for the block's node id.
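
For context, the check in question has roughly this shape (a simplified sketch
of the 5.9-era register_mem_sect_under_node(), not the exact source):

static int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg)
{
    int nid = *(int *)arg;
    unsigned long pfn, start_pfn, nr_pfns;
    int ret;

    start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
    nr_pfns = memory_block_size_bytes() >> PAGE_SHIFT;

    for (pfn = start_pfn; pfn < start_pfn + nr_pfns; pfn++) {
        /*
         * The per-pfn node-id filter only runs while the kernel
         * believes it is still booting.  On pSeries the nodes are
         * registered after system_state has already left
         * SYSTEM_BOOTING, so the filter is skipped and the block
         * gets linked under every node whose registration walks
         * over it.
         */
        if (system_state == SYSTEM_BOOTING) {
            int page_nid = get_nid_for_pfn(pfn);

            if (page_nid < 0 || page_nid != nid)
                continue;
        }

        ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
                           &mem_blk->dev.kobj,
                           kobject_name(&mem_blk->dev.kobj));
        if (ret)
            return ret;

        return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
                        &node_devices[nid]->dev.kobj,
                        kobject_name(&node_devices[nid]->dev.kobj));
    }

    return 0;
}
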
>>
>>>>>> ------------[ cut here ]------------
>>>>>> kernel BUG at /Users/laurent/src/linux-ppc/mm/memory_hotplug.c:1084!
>>>>>> Oops: Exception in kernel mode, sig: 5 [#1]
>>>>>> LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
>>>>>> Modules linked in: rpadlpar_io rpaphp pseries_rng rng_core vmx_crypto gf128mul binfmt_misc ip_tables x_tables xfs libcrc32c crc32c_vpmsum autofs4
>>>>>> CPU: 8 PID: 10256 Comm: drmgr Not tainted 5.9.0-rc1+ #25
>>>>>> NIP:  c000000000403f34 LR: c000000000403f2c CTR: 0000000000000000
>>>>>> REGS: c0000004876e3660 TRAP: 0700   Not tainted  (5.9.0-rc1+)
>>>>>> MSR:  800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 24000448  XER: 20040000
>>>>>> CFAR: c000000000846d20 IRQMASK: 0
>>>>>> GPR00: c000000000403f2c c0000004876e38f0 c0000000012f6f00 ffffffffffffffef
>>>>>> GPR04: 0000000000000227 c0000004805ae680 0000000000000000 00000004886f0000
>>>>>> GPR08: 0000000000000226 0000000000000003 0000000000000002 fffffffffffffffd
>>>>>> GPR12: 0000000088000484 c00000001ec96280 0000000000000000 0000000000000000
>>>>>> GPR16: 0000000000000000 0000000000000000 0000000000000004 0000000000000003
>>>>>> GPR20: c00000047814ffe0 c0000007ffff7c08 0000000000000010 c0000000013332c8
>>>>>> GPR24: 0000000000000000 c0000000011f6cc0 0000000000000000 0000000000000000
>>>>>> GPR28: ffffffffffffffef 0000000000000001 0000000150000000 0000000010000000
>>>>>> NIP [c000000000403f34] add_memory_resource+0x244/0x340
>>>>>> LR [c000000000403f2c] add_memory_resource+0x23c/0x340
>>>>>> Call Trace:
>>>>>> [c0000004876e38f0] [c000000000403f2c] add_memory_resource+0x23c/0x340 (unreliable)
>>>>>> [c0000004876e39c0] [c00000000040408c] __add_memory+0x5c/0xf0
>>>>>> [c0000004876e39f0] [c0000000000e2b94] dlpar_add_lmb+0x1b4/0x500
>>>>>> [c0000004876e3ad0] [c0000000000e3888] dlpar_memory+0x1f8/0xb80
>>>>>> [c0000004876e3b60] [c0000000000dc0d0] handle_dlpar_errorlog+0xc0/0x190
>>>>>> [c0000004876e3bd0] [c0000000000dc398] dlpar_store+0x198/0x4a0
>>>>>> [c0000004876e3c90] [c00000000072e630] kobj_attr_store+0x30/0x50
>>>>>> [c0000004876e3cb0] [c00000000051f954] sysfs_kf_write+0x64/0x90
>>>>>> [c0000004876e3cd0] [c00000000051ee40] kernfs_fop_write+0x1b0/0x290
>>>>>> [c0000004876e3d20] [c000000000438dd8] vfs_write+0xe8/0x290
>>>>>> [c0000004876e3d70] [c0000000004391ac] ksys_write+0xdc/0x130
>>>>>> [c0000004876e3dc0] [c000000000034e40] system_call_exception+0x160/0x270
>>>>>> [c0000004876e3e20] [c00000000000d740] system_call_common+0xf0/0x27c
>>>>>> Instruction dump:
>>>>>> 48442e35 60000000 0b030000 3cbe0001 7fa3eb78 7bc48402 38a5fffe 7ca5fa14
>>>>>> 78a58402 48442db1 60000000 7c7c1b78 <0b030000> 7f23cb78 4bda371d 60000000
>>>>>> ---[ end trace 562fd6c109cd0fb2 ]---
>>>>>
>>>>> The BUG_ON on failure is absolutely horrendous. There must be a better
>>>>> way to handle a failure like that. The failure means that
>>>>> sysfs_create_link_nowarn has failed. Please describe why that is the
>>>>> case.
>>>>>
>>>>>> This patch addresses the root cause by not relying on the system_state
>>>>>> value to detect whether the call is due to a hot-plug operation or not. An
>>>>>> additional parameter is added to link_mem_sections() to tell the context of
>>>>>> the call, and this parameter is propagated to register_mem_sect_under_node()
>>>>>> through the walk_memory_blocks() call.
>>>>>
>>>>> This looks like a hack to me and it deserves a better explanation. The
>>>>> existing code is a hack on its own and it is inconsistent with other
>>>>> boot time detection. We are using (system_state < SYSTEM_RUNNING) at other
>>>>> places IIRC. Would it help to use the same here as well? Maybe we want to
>>>>> wrap that inside a helper (early_memory_init()) and use it at all
>>>>> places.
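
For what it's worth, such a helper would be tiny; a sketch only, using the
early_memory_init() name suggested above:

/*
 * Single place encoding "is this early boot memory init or a runtime
 * hot-plug?".  The open question is whether a hot-plug operation can
 * already happen while system_state is still below SYSTEM_RUNNING, in
 * which case this test alone cannot tell the two paths apart.
 */
static inline bool early_memory_init(void)
{
    return system_state < SYSTEM_RUNNING;
}
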
>>>>
>>>> I agree, this looks like a hack to check for the system_state value.
>>>> I'll follow David's proposal and introduce an enum detailing whether the
>>>> node id check has to be done or not.
>>>
>>> I am not sure an enum is going to make the existing situation less
>>> messy. Sure we somehow have to distinguish boot init and runtime hotplug
>>> because they have different constraints. I am arguing that a) we should
>>> have a consistent way to check for those and b) we shouldn't blow up
>>> easily just because sysfs infrastructure has failed to initialize.
>>
>> For point a, using the enum allows register_mem_sect_under_node() to know
>> whether the link operation is due to a hot-plug operation or done at boot
>> time.
> 
> Yes, but let me repeat. We have a mess here and different paths check
> for the very same condition by different ways. We need to unify those.

What are you suggesting to unify these checks: a MP_* enum as suggested by
David, or something else?
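
To make the enum option concrete, the shape I have in mind is roughly the
following (the names are placeholders, not a final proposal):

/* Placeholder names, for illustration only. */
enum meminit_context {
    MEMINIT_EARLY,      /* boot time initialization */
    MEMINIT_HOTPLUG,    /* runtime hot-add */
};

struct link_mem_arg {
    int nid;
    enum meminit_context context;
};

static int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg)
{
    struct link_mem_arg *lma = arg;

    if (lma->context == MEMINIT_EARLY) {
        /* per-pfn node id filtering, as done today for the boot case */
    }

    /* create the node <-> memory block sysfs links as before */
    return 0;
}

int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
              enum meminit_context context)
{
    struct link_mem_arg lma = { .nid = nid, .context = context };

    return walk_memory_blocks(PFN_PHYS(start_pfn),
                  PFN_PHYS(end_pfn - start_pfn), &lma,
                  register_mem_sect_under_node);
}

The point is that the caller decides the context once and it is carried down,
instead of being re-derived from system_state deep in the callback.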

> 
>> For point b, one option would be to ignore the link error when the link
>> already exists, but that BUG_ON() had the benefit of highlighting the
>> root issue.
> 
> Yes BUG_ON is obviously an over-reaction. The system is not in a state
> to die anytime soon.
> 
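
Agreed. For point b, the softer handling could then look like this (a sketch
only; the helper name is made up, and it assumes the stale link is reported as
-EEXIST by sysfs_create_link_nowarn()):

/* Hypothetical helper, name made up for this sketch. */
static int link_block_under_node(struct memory_block *mem_blk, int nid)
{
    int ret;

    ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
                       &mem_blk->dev.kobj,
                       kobject_name(&mem_blk->dev.kobj));
    if (ret == -EEXIST) {
        /* leftover link from a previous unplug: complain, don't die */
        pr_warn("%s is already linked under node%d\n",
            kobject_name(&mem_blk->dev.kobj), nid);
        ret = 0;
    }

    return ret;
}

That would let add_memory_resource() drop the BUG_ON() while still making the
inconsistency visible in the logs.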



Thread overview: 37+ messages
     [not found] <5cbd92e1-c00a-4253-0119-c872bfa0f2bc@redhat.com>
2020-09-08 17:08 ` [PATCH] mm: don't rely on system state to detect hot-plug operations Laurent Dufour
2020-09-08 17:31   ` Greg Kroah-Hartman
2020-09-08 17:40     ` David Hildenbrand
2020-09-09  8:26       ` Laurent Dufour
2020-09-09  8:31         ` David Hildenbrand
2020-09-09  9:35           ` Laurent Dufour
2020-09-09  6:56     ` Laurent Dufour
2020-09-09  7:40   ` Michal Hocko
2020-09-09  7:48     ` Laurent Dufour
2020-09-09  9:09       ` Michal Hocko
2020-09-09  9:21         ` Laurent Dufour
2020-09-09  9:24           ` David Hildenbrand
2020-09-09  9:32             ` Laurent Dufour
2020-09-09 12:30             ` Greg Kroah-Hartman
2020-09-09 12:32               ` David Hildenbrand
2020-09-09 12:36                 ` Greg Kroah-Hartman
2020-09-09 12:45                 ` Michal Hocko
2020-09-09 10:59           ` Michal Hocko
2020-09-09 16:07             ` Laurent Dufour [this message]
2020-09-10  7:23               ` Michal Hocko
2020-09-10  7:51                 ` Laurent Dufour
2020-09-10 11:12                   ` Michal Hocko
2020-09-10 11:35                     ` Laurent Dufour
2020-09-10 12:00                       ` David Hildenbrand
2020-09-10 12:36                         ` Laurent Dufour
2020-09-10 12:38                           ` David Hildenbrand
2020-09-10 12:01                       ` Michal Hocko
2020-09-10 12:03                       ` Oscar Salvador
2020-09-10 12:32                         ` Laurent Dufour
2020-09-10 12:47                         ` Michal Hocko
2020-09-10 12:48                           ` Michal Hocko
2020-09-10 13:39                             ` Oscar Salvador
2020-09-10 13:51                               ` Michal Hocko
2020-09-10 14:40                                 ` Michal Hocko
2020-09-10 12:49                           ` David Hildenbrand
2020-09-10 13:54                             ` Michal Hocko
2020-09-10 13:57                               ` David Hildenbrand
