From: David Hildenbrand <david@redhat.com>
To: Baoquan He <bhe@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	mhocko@suse.com, akpm@linux-foundation.org, aarcange@redhat.com
Subject: Re: Memory hotplug softlock issue
Date: Wed, 14 Nov 2018 10:25:57 +0100
Message-ID: <8c03f925-8ca4-688c-569a-a7a449612782@redhat.com>
In-Reply-To: <20181114090042.GD2653@MiWiFi-R3L-srv>

On 14.11.18 10:00, Baoquan He wrote:
> Hi David,
> 
> On 11/14/18 at 09:18am, David Hildenbrand wrote:
>> Code seems to be waiting for the mem_hotplug_lock in read.
>> We hold mem_hotplug_lock in write whenever we online/offline/add/remove
>> memory. There are two ways to trigger offlining of memory:
>>
>> 1. Offlining via "cat offline > /sys/devices/system/memory/memory0/state"
>>
>> This always properly took the mem_hotplug_lock. Nothing changed
>>
>> 2. Offlining via "cat 0 > /sys/devices/system/memory/memory0/online"
>>
>> This didn't take the mem_hotplug_lock and I fixed that for this release.
>>
>> So if you were testing with 1., you should have seen the same error
>> before this release (unless there is something else now broken in this
>> release).
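
Side note: both interfaces can of course also be driven from a tiny C
program instead of the shell. A minimal sketch, assuming memory block 0
exists, is removable and we run as root:

/* Minimal sketch: request offlining of memory block 0 via either sysfs
 * interface discussed above. Assumes /sys/devices/system/memory/memory0
 * exists, the block is removable and we run as root. */
#include <stdio.h>

static int write_sysfs(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        if (fputs(val, f) == EOF) {
                perror(path);
                fclose(f);
                return -1;
        }
        if (fclose(f) == EOF) {
                perror(path);
                return -1;
        }
        return 0;
}

int main(void)
{
        /* 1. via the "state" attribute */
        write_sysfs("/sys/devices/system/memory/memory0/state", "offline");
        /* 2. via the "online" attribute; in practice you would use one or
         * the other, not both */
        write_sysfs("/sys/devices/system/memory/memory0/online", "0");
        return 0;
}
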
> 
> Thanks a lot for looking into this.
> 
> I triggered sysrq+t to check the threads' states. You can see that we
> use firmware to trigger an ACPI event which ends up in acpi_bus_offline();
> it truly didn't take mem_hotplug_lock before, and takes it now with your
> fix in commit 381eab4a6ee ("mm/memory_hotplug: fix online/offline_pages
> called w.o. mem_hotplug_lock").
> 
> [  +0.007062] Workqueue: kacpi_hotplug acpi_hotplug_work_fn
> [  +0.005398] Call Trace:
> [  +0.002476]  ? page_vma_mapped_walk+0x307/0x710
> [  +0.004538]  ? page_remove_rmap+0xa2/0x340
> [  +0.004104]  ? ptep_clear_flush+0x54/0x60
> [  +0.004027]  ? enqueue_entity+0x11c/0x620
> [  +0.005904]  ? schedule+0x28/0x80
> [  +0.003336]  ? rmap_walk_file+0xf9/0x270
> [  +0.003940]  ? try_to_unmap+0x9c/0xf0
> [  +0.003695]  ? migrate_pages+0x2b0/0xb90
> [  +0.003959]  ? try_offline_node+0x160/0x160
> [  +0.004214]  ? __offline_pages+0x6ce/0x8e0
> [  +0.004134]  ? memory_subsys_offline+0x40/0x60
> [  +0.004474]  ? device_offline+0x81/0xb0
> [  +0.003867]  ? acpi_bus_offline+0xdb/0x140
> [  +0.004117]  ? acpi_device_hotplug+0x21c/0x460
> [  +0.004458]  ? acpi_hotplug_work_fn+0x1a/0x30
> [  +0.004372]  ? process_one_work+0x1a1/0x3a0
> [  +0.004195]  ? worker_thread+0x30/0x380
> [  +0.003851]  ? drain_workqueue+0x120/0x120
> [  +0.004117]  ? kthread+0x112/0x130
> [  +0.003411]  ? kthread_park+0x80/0x80
> [  +0.005325]  ? ret_from_fork+0x35/0x40
> 

Yes, this is indeed another code path that was fixed (and I didn't
actually realize it ;) ). Thanks for the callchain. Even before my fix,
hotplug would never have succeeded here (offline_pages() would just have
silently looped forever), as far as I can tell.
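
For the record, the effect of that commit is roughly that the write lock
is taken inside the mm layer, so the sysfs "state" path, the sysfs
"online" path and the ACPI path above are all covered. A paraphrased
sketch (not the literal kernel source, and the exact placement differs in
the real code; do_the_offlining() just stands in for the __offline_pages()
work):

#include <linux/memory_hotplug.h>

/* Paraphrased sketch of the idea behind commit 381eab4a6ee, not the
 * literal kernel source: take mem_hotplug_lock in write around the actual
 * offlining work, independent of which caller triggered it. */
static int do_the_offlining(unsigned long start_pfn, unsigned long end_pfn)
{
        /* isolate, migrate away and remove the pages; in the real code
         * this is the __offline_pages() loop */
        return 0;
}

int offline_pages_sketch(unsigned long start_pfn, unsigned long nr_pages)
{
        int ret;

        mem_hotplug_begin();            /* mem_hotplug_lock taken in write */
        ret = do_the_offlining(start_pfn, start_pfn + nr_pages);
        mem_hotplug_done();             /* lock dropped again */

        return ret;
}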

> 
>>
>>
>> The real question is, however, why offlining of the last block doesn't
>> succeed. In __offline_pages() we basically have an endless loop (while
>> holding the mem_hotplug_lock in write). Now I consider this piece of
>> code very problematic (we should automatically fail after X
>> attempts/after X seconds, we should not ignore -ENOMEM), and we've had
>> other BUGs whereby we would run into an endless loop here (e.g. related
>> to hugepages I guess).
> 
> Hmm, even though memory hotplug stalled there, there is still plenty of
> memory. E.g. this system has 8 nodes and each node has 64 GB of memory,
> so 512 GB in total. I ran "stress -m 200" to spawn 200 processes that
> malloc and then free 256 MB continuously, which eats about 50 GB in
> total. In theory, there is still plenty of memory to migrate to.

Maybe a NUMA issue? But I am just making wild guesses. Maybe it is not
-ENOMEM but just some other migration condition that is not properly
handled (see Michal's reply).
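
Independent of what the actual error is, to make the point from above
about the endless loop concrete: what I have in mind is roughly the
following (hypothetical sketch only, as it could sit in
mm/memory_hotplug.c; the helper name and the retry constant are made up,
do_migrate_range() is the existing helper):

#define OFFLINE_MIGRATE_RETRIES 50

/* Hypothetical sketch: bound the migration retries and propagate hard
 * errors such as -ENOMEM instead of retrying forever. */
static int migrate_range_bounded(unsigned long start_pfn,
                                 unsigned long end_pfn)
{
        int retries = 0;
        int ret;

        do {
                ret = do_migrate_range(start_pfn, end_pfn);
                if (ret == -ENOMEM)
                        return ret;     /* don't silently swallow OOM */
                if (ret && ++retries > OFFLINE_MIGRATE_RETRIES)
                        return -EBUSY;  /* give up instead of hanging */
                cond_resched();
        } while (ret);

        return 0;
}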

> 
>>
>> You mentioned memory pressure; if our host is under memory pressure, we
>> can easily trigger running into an endless loop there, because we
>> basically ignore -ENOMEM e.g. when we cannot get a page to migrate some
>> memory to be offlined. I assume this is the case here.
>> do_migrate_range() could be the bad boy if it keeps failing forever and
>> we keep retrying.
> 
> Not sure what other people think about this. If memory removal fails
> while plenty of free memory is still left, I worry customers will complain.

Indeed, we have to look into this.

> 
> Yeah, it stopped at do_migrate_range() when trying to migrate the last
> memory block. And each time it's the last memory block which can't be
> offlined, and it hangs.

It would be interesting to see which error message we keep getting.
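
If it is easy for you to reproduce, one crude way to find out would be to
log the migrate_pages() result in do_migrate_range() and dump the pages
that could not be migrated, something like this (hypothetical debug hack,
untested; "ret" and "source" are the local variable names as I remember
them from do_migrate_range()):

        /* hypothetical debug hack, right after the migrate_pages() call
         * in do_migrate_range(): report why migration keeps failing */
        if (ret) {
                struct page *page;

                pr_warn("offlining: migrate_pages() returned %d\n", ret);
                list_for_each_entry(page, &source, lru)
                        dump_page(page, "not migrated during offlining");
        }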

> 
> If any messages or information are needed, I can provide them.
> 
> Thanks
> Baoquan
> 


-- 

Thanks,

David / dhildenb


Thread overview: 55+ messages
2018-11-14  7:09 Memory hotplug softlock issue Baoquan He
2018-11-14  7:16 ` Baoquan He
2018-11-14  8:18 ` David Hildenbrand
2018-11-14  9:00   ` Baoquan He
2018-11-14  9:25     ` David Hildenbrand [this message]
2018-11-14  9:41       ` Michal Hocko
2018-11-14  9:48         ` David Hildenbrand
2018-11-14 10:04           ` Michal Hocko
2018-11-14  9:01   ` Michal Hocko
2018-11-14  9:22     ` David Hildenbrand
2018-11-14  9:37       ` Michal Hocko
2018-11-14  9:39         ` David Hildenbrand
2018-11-14 14:52     ` Baoquan He
2018-11-14 15:00       ` Michal Hocko
2018-11-15  5:10         ` Baoquan He
2018-11-15  7:30           ` Michal Hocko
2018-11-15  7:53             ` Baoquan He
2018-11-15  8:30               ` Michal Hocko
2018-11-15  9:42                 ` David Hildenbrand
2018-11-15  9:52                   ` Baoquan He
2018-11-15  9:53                     ` David Hildenbrand
2018-11-15 13:12                 ` Baoquan He
2018-11-15 13:19                   ` Michal Hocko
2018-11-15 13:23                     ` Baoquan He
2018-11-15 14:25                       ` Michal Hocko
2018-11-15 13:38                     ` Baoquan He
2018-11-15 14:32                       ` Michal Hocko
2018-11-15 14:34                         ` Baoquan He
2018-11-16  1:24                         ` Baoquan He
2018-11-16  9:14                           ` Michal Hocko
2018-11-17  4:22                             ` Baoquan He
2018-11-19 10:52                             ` Baoquan He
2018-11-19 12:40                               ` Michal Hocko
2018-11-19 12:51                                 ` Michal Hocko
2018-11-19 14:10                                   ` Michal Hocko
2018-11-19 16:36                                     ` Vlastimil Babka
2018-11-19 16:46                                       ` Michal Hocko
2018-11-19 16:46                                         ` Vlastimil Babka
2018-11-19 16:48                                           ` Vlastimil Babka
2018-11-19 17:01                                             ` Michal Hocko
2018-11-19 17:33                                     ` Michal Hocko
2018-11-19 20:34                                       ` Hugh Dickins
2018-11-19 20:59                                         ` Michal Hocko
2018-11-20  1:56                                           ` Baoquan He
2018-11-20  5:44                                             ` Hugh Dickins
2018-11-20 13:38                                               ` Vlastimil Babka
2018-11-20 13:58                                                 ` Baoquan He
2018-11-20 14:05                                                   ` Michal Hocko
2018-11-20 14:12                                                     ` Baoquan He
2018-11-21  1:21                                                   ` Hugh Dickins
2018-11-21  1:08                                                 ` Hugh Dickins
2018-11-21  3:20                                                   ` Hugh Dickins
2018-11-21 17:31                                               ` Michal Hocko
2018-11-22  1:53                                                 ` Hugh Dickins
2018-11-14 10:00 ` Michal Hocko
