linux-acpi.vger.kernel.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-acpi@vger.kernel.org,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Oscar Salvador <osalvador@suse.de>
Subject: Re: [PATCH v1] ACPI / scan: Acquire device_hotplug_lock in acpi_scan_init()
Date: Fri, 26 Jul 2019 12:31:12 +0200	[thread overview]
Message-ID: <20190726103112.GL6142@dhcp22.suse.cz> (raw)
In-Reply-To: <45c9f942-fe67-fa60-b62f-31867f9c6e53@redhat.com>

On Fri 26-07-19 10:57:52, David Hildenbrand wrote:
> On 26.07.19 10:44, Michal Hocko wrote:
> > On Fri 26-07-19 10:36:42, David Hildenbrand wrote:
> >> On 26.07.19 10:31, Michal Hocko wrote:
> > [...]
> >>> Anyway, my dislike of the device_hotplug_lock persists. I would really
> >>> love to see it go rather than grow even further into the hotplug code.
> >>> Long term we should really be striving for mem-hotplug-internal and
> >>> ideally range-defined locking.
> >>
> >> Yes, and that is a different story, because it will require major
> >> changes to all add_memory() users (especially due to the documented
> >> race conditions). That said, memory hotplug locking is not ideal yet.
> > 
> > I am really happy to hear that we are on the same page here. Do we have
> > any document (I am sorry, but I am lagging behind recent development in
> > this area) that describes the roadblocks to removing device_hotplug_lock?
> 
> Only the core-api document I mentioned (I documented there quite some of
> the race conditions I identified back then).

That document doesn't describe which _data structures_ are protected by
the lock though. It documents only the current state of locking.

> I am not sure if we can remove it completely from
> add_memory()/remove_memory(): We actually create/delete devices which
> can otherwise create races with user space.

More details would be really appreciated.

> Besides that:
> - try_offline_node() needs the lock to synchronize against cpu hotplug
> - I *assume* try_online_node() needs it as well

More details on why would be great.

> Then, there is the possible race condition with user space onlining
> memory avoided by the lock. Also, currently the lock protects the
> "online_type" when onlining memory.

I do not see the race if the user-API-triggered online/offline takes a
range lock on the affected physical memory range.
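To make the idea concrete: a range lock of that sort could look roughly
like the user-space sketch below. All names here are hypothetical and
illustrative, not kernel API; a real implementation would live in the
hotplug core and use proper kernel primitives.

```c
/* Hypothetical sketch of range-defined locking: instead of one global
 * device_hotplug_lock, each operation locks only the physical address
 * range it touches.  Illustrative only, not kernel code. */
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct locked_range {
	unsigned long start, end;	/* half-open interval [start, end) */
	struct locked_range *next;
};

static struct locked_range *locked;	/* ranges currently held */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t released = PTHREAD_COND_INITIALIZER;

static int overlaps(unsigned long s, unsigned long e)
{
	for (struct locked_range *r = locked; r; r = r->next)
		if (s < r->end && r->start < e)
			return 1;
	return 0;
}

/* Non-blocking attempt: 0 on success, -1 if the range is busy. */
int try_lock_range(unsigned long start, unsigned long end)
{
	pthread_mutex_lock(&list_lock);
	if (overlaps(start, end)) {
		pthread_mutex_unlock(&list_lock);
		return -1;
	}
	struct locked_range *r = malloc(sizeof(*r));
	r->start = start;
	r->end = end;
	r->next = locked;
	locked = r;
	pthread_mutex_unlock(&list_lock);
	return 0;
}

/* Blocking variant: waits until no overlapping range is held. */
void lock_range(unsigned long start, unsigned long end)
{
	pthread_mutex_lock(&list_lock);
	while (overlaps(start, end))
		pthread_cond_wait(&released, &list_lock);
	struct locked_range *r = malloc(sizeof(*r));
	r->start = start;
	r->end = end;
	r->next = locked;
	locked = r;
	pthread_mutex_unlock(&list_lock);
}

void unlock_range(unsigned long start, unsigned long end)
{
	pthread_mutex_lock(&list_lock);
	for (struct locked_range **p = &locked; *p; p = &(*p)->next) {
		if ((*p)->start == start && (*p)->end == end) {
			struct locked_range *r = *p;
			*p = r->next;
			free(r);
			break;
		}
	}
	pthread_cond_broadcast(&released);
	pthread_mutex_unlock(&list_lock);
}
```

With that in place, a user-space-triggered online of one memory block
would only serialize against operations on overlapping physical ranges,
not against every other hotplug operation in the system.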

> Then, there might be other global variables (eventually
> zone/node/section related) that might need this lock right now - no
> details known.

Zones/nodes have their own locking for spans. Sections should be using
low-level locking, but I am not really sure that is needed if there is
a mem hotplug lock in place (range or global).
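For reference, the zone span locking mentioned above is seqlock-based in
the kernel (zone->span_seqlock): readers retry instead of blocking while
a writer updates the span. The following is a simplified user-space
model of that pattern, not the kernel implementation; a real seqlock
needs stricter memory ordering than shown here.

```c
/* Simplified user-space model of the seqlock pattern used for zone
 * spans: an even sequence count means the span is stable, an odd one
 * means a write is in progress and readers must retry. */
#include <assert.h>

struct span {
	unsigned seq;			/* even = stable, odd = writer active */
	unsigned long start_pfn;
	unsigned long nr_pages;
};

void span_write(struct span *s, unsigned long start, unsigned long nr)
{
	__atomic_fetch_add(&s->seq, 1, __ATOMIC_RELEASE);	/* odd: begin */
	s->start_pfn = start;
	s->nr_pages = nr;
	__atomic_fetch_add(&s->seq, 1, __ATOMIC_RELEASE);	/* even: done */
}

void span_read(struct span *s, unsigned long *start, unsigned long *nr)
{
	unsigned seq;

	do {
		/* Wait until no write is in progress. */
		while ((seq = __atomic_load_n(&s->seq, __ATOMIC_ACQUIRE)) & 1)
			;
		*start = s->start_pfn;
		*nr = s->nr_pages;
		/* Retry if a writer raced with us. */
	} while (__atomic_load_n(&s->seq, __ATOMIC_ACQUIRE) != seq);
}
```

The point being: span consistency is already handled locally, so a
global device_hotplug_lock is not what protects it.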

> IOW, we have to be very carefully and it is more involved than it might
> seem.

I am not questioning that. And that is why I am asking about a todo list
for that transition.

> Locking is definitely better (and more reliable!) than one year ago, but
> there is definitely a lot to do. (Unfortunately, just like in many areas
> of the memory hotplug code :( - say, zone handling when offlining or
> failing to online memory.)

Yeah, the code is shaping up, and I am happy to see that happening. But
please try to understand that I really do not like to see ad-hoc
locking enforcement without a clear locking model in place; this patch
is an example of that. Whoever tries to rationalize the locking further
will stumble over this and scratch their head over why the heck the
locking is there. My experience tells me that people usually go along
with existing code and make further assumptions based on it, so we are
unlikely to ever get rid of the locking...
-- 
Michal Hocko
SUSE Labs


Thread overview: 21+ messages
2019-07-24 14:30 [PATCH v1] ACPI / scan: Acquire device_hotplug_lock in acpi_scan_init() David Hildenbrand
2019-07-25  9:11 ` Rafael J. Wysocki
2019-07-25  9:18 ` Oscar Salvador
2019-07-25  9:22   ` Rafael J. Wysocki
2019-07-25  9:23     ` David Hildenbrand
2019-07-25 12:56 ` Michal Hocko
2019-07-25 13:05   ` David Hildenbrand
2019-07-25 13:57     ` Michal Hocko
2019-07-25 14:35       ` David Hildenbrand
2019-07-25 19:19         ` Michal Hocko
2019-07-25 20:49           ` David Hildenbrand
2019-07-25 21:23             ` Rafael J. Wysocki
2019-07-26  7:20               ` David Hildenbrand
2019-07-26  7:57             ` Michal Hocko
2019-07-26  8:05               ` David Hildenbrand
2019-07-26  8:31                 ` Michal Hocko
2019-07-26  8:36                   ` David Hildenbrand
2019-07-26  8:44                     ` Michal Hocko
2019-07-26  8:57                       ` David Hildenbrand
2019-07-26 10:31                         ` Michal Hocko [this message]
2019-07-26 10:37                           ` David Hildenbrand

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20190726103112.GL6142@dhcp22.suse.cz \
    --to=mhocko@kernel.org \
    --cc=akpm@linux-foundation.org \
    --cc=david@redhat.com \
    --cc=linux-acpi@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=osalvador@suse.de \
    --cc=rjw@rjwysocki.net \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.