From: Qian Cai <cai@lca.pw>
To: David Hildenbrand <david@redhat.com>, Michal Hocko <mhocko@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Oscar Salvador <osalvador@suse.de>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
Date: Wed, 25 Sep 2019 16:32:48 -0400
Message-ID: <1569443568.5576.231.camel@lca.pw>
In-Reply-To: <92bce3d4-0a3e-e157-529d-35aafbc30f3b@redhat.com>

On Wed, 2019-09-25 at 21:48 +0200, David Hildenbrand wrote:
> On 25.09.19 20:20, Qian Cai wrote:
> > On Wed, 2019-09-25 at 19:48 +0200, Michal Hocko wrote:
> > > On Wed 25-09-19 12:01:02, Qian Cai wrote:
> > > > On Wed, 2019-09-25 at 09:02 +0200, David Hildenbrand wrote:
> > > > > On 24.09.19 20:54, Qian Cai wrote:
> > > > > > On Tue, 2019-09-24 at 17:11 +0200, Michal Hocko wrote:
> > > > > > > On Tue 24-09-19 11:03:21, Qian Cai wrote:
> > > > > > > [...]
> > > > > > > > While at it, it might be a good time to rethink the whole locking over there,
> > > > > > > > as right now reading files under /sys/kernel/slab/ could trigger a possible
> > > > > > > > deadlock anyway.
> > > > > > > > 
> > > > > > > 
> > > > > > > [...]
> > > > > > > > [  442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
> > > > > > > > [  442.459748][ T5224]        validate_chain+0xd10/0x2bcc
> > > > > > > > [  442.464883][ T5224]        __lock_acquire+0x7f4/0xb8c
> > > > > > > > [  442.469930][ T5224]        lock_acquire+0x31c/0x360
> > > > > > > > [  442.474803][ T5224]        get_online_mems+0x54/0x150
> > > > > > > > [  442.479850][ T5224]        show_slab_objects+0x94/0x3a8
> > > > > > > > [  442.485072][ T5224]        total_objects_show+0x28/0x34
> > > > > > > > [  442.490292][ T5224]        slab_attr_show+0x38/0x54
> > > > > > > > [  442.495166][ T5224]        sysfs_kf_seq_show+0x198/0x2d4
> > > > > > > > [  442.500473][ T5224]        kernfs_seq_show+0xa4/0xcc
> > > > > > > > [  442.505433][ T5224]        seq_read+0x30c/0x8a8
> > > > > > > > [  442.509958][ T5224]        kernfs_fop_read+0xa8/0x314
> > > > > > > > [  442.515007][ T5224]        __vfs_read+0x88/0x20c
> > > > > > > > [  442.519620][ T5224]        vfs_read+0xd8/0x10c
> > > > > > > > [  442.524060][ T5224]        ksys_read+0xb0/0x120
> > > > > > > > [  442.528586][ T5224]        __arm64_sys_read+0x54/0x88
> > > > > > > > [  442.533634][ T5224]        el0_svc_handler+0x170/0x240
> > > > > > > > [  442.538768][ T5224]        el0_svc+0x8/0xc
> > > > > > > 
> > > > > > > I believe the lock is not really needed here. We do not deallocate the
> > > > > > > pgdat of a hotremoved node nor destroy the slab state, because existing
> > > > > > > slabs would prevent hotremove from continuing in the first place.
> > > > > > > 
> > > > > > > There are likely details to be checked of course but the lock just seems
> > > > > > > bogus.
> > > > > > 
> > > > > > Check 03afc0e25f7f ("slab: get_online_mems for
> > > > > > kmem_cache_{create,destroy,shrink}"). It actually talks about the races during
> > > > > > memory as well as cpu hotplug, so it might even be that the cpu_hotplug_lock
> > > > > > removal is problematic?
> > > > > > 
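
For reference, what that commit added is roughly the following pattern, here
sketched for kmem_cache_destroy() (hand-written and simplified; the real
mm/slab_common.c differs in the details):

void kmem_cache_destroy(struct kmem_cache *s)
{
        /*
         * Take a stable snapshot of both the cpu and the memory/node
         * online masks before tearing down the per-cpu and per-node
         * parts of the cache.
         */
        get_online_cpus();
        get_online_mems();

        mutex_lock(&slab_mutex);
        /* ... unlink the cache and free its per-cpu / per-node data ... */
        mutex_unlock(&slab_mutex);

        put_online_mems();
        put_online_cpus();
}
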
> > > > > 
> > > > > Which removal are you referring to? get_online_mems() does not mess with
> > > > > the cpu hotplug lock (and therefore is not affected by this patch).
> > > > 
> > > > The one in your patch. I suspect there might be races between the whole NUMA
> > > > node hotplug, kmem_cache_create(), and show_slab_objects(). See bfc8c90139eb
> > > > ("mem-hotplug: implement get/put_online_mems")
> > > > 
> > > > "kmem_cache_{create,destroy,shrink} need to get a stable value of cpu/node
> > > > online mask, because they init/destroy/access per-cpu/node kmem_cache parts,
> > > > which can be allocated or destroyed on cpu/mem hotplug."
> > > 
> > > I still have to grasp that code, but if the slub allocator really needs
> > > a stable cpu mask then it should be using the explicit cpu hotplug
> > > locking rather than relying on a side effect of memory hotplug locking.
> > > 
> > > > Both online_pages() and show_slab_objects() need to get a stable value of
> > > > cpu/node online mask.
> > > 
> > > Could you be more specific about why online_pages needs a stable cpu online
> > > mask? I do not think that show_slab_objects is a real problem because a
> > > potential race shouldn't be critical.
> > 
> > build_all_zonelists()
> >   __build_all_zonelists()
> >     for_each_online_cpu(cpu)
> > 
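
The loop I am referring to sits at the end of that chain; roughly (a
simplified sketch of __build_all_zonelists(), leaving out the locking and
config guards, so details may differ):

static void __build_all_zonelists(void *data)
{
        int nid, cpu;

        for_each_online_node(nid)
                build_zonelists(NODE_DATA(nid));

        /*
         * Iterates the *cpu* online mask from the memory hotplug path;
         * a cpu coming or going concurrently could be missed here.
         */
        for_each_online_cpu(cpu)
                set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
}
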
> 
> Two things:
> 
> a) We currently always hold the device hotplug lock when onlining memory
> and when onlining cpus (for CPUs at least via user space - we would have
> to double check other call paths). So theoretically, that should guard
> us from something like that already.
> 
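
For completeness, the user-space onlining path you mention goes through the
device hotplug lock roughly like this (a sketch following drivers/base/core.c
from memory, so treat the details as approximate):

static ssize_t online_store(struct device *dev, struct device_attribute *attr,
                            const char *buf, size_t count)
{
        bool val;
        int ret;

        ret = strtobool(buf, &val);
        if (ret < 0)
                return ret;

        /* Serializes against other device-level hotplug operations. */
        ret = lock_device_hotplug_sysfs();
        if (ret)
                return ret;

        ret = val ? device_online(dev) : device_offline(dev);
        unlock_device_hotplug();

        return ret < 0 ? ret : count;
}
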
> b)
> 
> commit 11cd8638c37f6c400cc472cc52b6eccb505aba6e
> Author: Michal Hocko <mhocko@suse.com>
> Date:   Wed Sep 6 16:20:34 2017 -0700
> 
>     mm, page_alloc: remove stop_machine from build_all_zonelists
> 
> Tells me:
> 
> "Updates of the zonelists happen very seldom, basically only when a zone
>  becomes populated during memory online or when it loses all the memory
>  during offline.  A racing iteration over zonelists could either miss a
>  zone or try to work on one zone twice.  Both of these are something we
>  can live with occasionally because there will always be at least one
>  zone visible so we are not likely to fail allocation too easily for
>  example."
> 
> Sounds like, if there were a race, we could live with it, if I am not
> getting that totally wrong.
> 

What's the problem you are trying to solve? Why is it more important to live
with races than to keep the code correct?


Thread overview: 23+ messages
2019-09-24 14:36 [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock David Hildenbrand
2019-09-24 14:48 ` Michal Hocko
2019-09-24 15:01   ` David Hildenbrand
2019-09-24 15:03 ` Qian Cai
2019-09-24 15:11   ` Michal Hocko
2019-09-24 18:54     ` Qian Cai
2019-09-25  7:02       ` David Hildenbrand
2019-09-25 16:01         ` Qian Cai
2019-09-25 17:48           ` Michal Hocko
2019-09-25 18:20             ` Qian Cai
2019-09-25 19:48               ` David Hildenbrand
2019-09-25 20:32                 ` Qian Cai [this message]
2019-09-26  7:26                   ` David Hildenbrand
2019-09-26  7:38                     ` Michal Hocko
2019-09-26  7:26               ` Michal Hocko
2019-09-26 11:19                 ` Qian Cai
2019-09-26 11:52                   ` Michal Hocko
2019-09-26 13:02                     ` Qian Cai
2019-09-26 13:14                       ` David Hildenbrand
2019-09-25 10:03       ` Michal Hocko
2019-09-24 15:23   ` David Hildenbrand
2019-10-02 21:37 ` Qian Cai
2019-10-04  7:42   ` David Hildenbrand
