From: Pasha Tatashin <Pavel.Tatashin@microsoft.com>
To: David Hildenbrand <david@redhat.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"linux-acpi@vger.kernel.org" <linux-acpi@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"devel@linuxdriverproject.org" <devel@linuxdriverproject.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Rashmica Gupta <rashmica.g@gmail.com>,
	Balbir Singh <bsingharora@gmail.com>,
	Michael Neuling <mikey@neuling.org>
Subject: Re: [PATCH RFCv2 5/6] powerpc/powernv: hold device_hotplug_lock in memtrace_offline_pages()
Date: Thu, 30 Aug 2018 19:38:26 +0000	[thread overview]
Message-ID: <226aaaf7-7d1c-6f7b-5bf4-e6eb99862ebd@microsoft.com> (raw)
In-Reply-To: <20180821104418.12710-6-david@redhat.com>

Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>

On 8/21/18 6:44 AM, David Hildenbrand wrote:
> Let's perform all checking + offlining + removing under
> device_hotplug_lock, so nobody can mess with these devices via
> sysfs concurrently.
> 
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Rashmica Gupta <rashmica.g@gmail.com>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Cc: Michael Neuling <mikey@neuling.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  arch/powerpc/platforms/powernv/memtrace.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
> index ef7181d4fe68..473e59842ec5 100644
> --- a/arch/powerpc/platforms/powernv/memtrace.c
> +++ b/arch/powerpc/platforms/powernv/memtrace.c
> @@ -74,9 +74,13 @@ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
>  {
>  	u64 end_pfn = start_pfn + nr_pages - 1;
>  
> +	lock_device_hotplug();
> +
>  	if (walk_memory_range(start_pfn, end_pfn, NULL,
> -	    check_memblock_online))
> +	    check_memblock_online)) {
> +		unlock_device_hotplug();
>  		return false;
> +	}
>  
>  	walk_memory_range(start_pfn, end_pfn, (void *)MEM_GOING_OFFLINE,
>  			  change_memblock_state);
> @@ -84,14 +88,16 @@ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
>  	if (offline_pages(start_pfn, nr_pages)) {
>  		walk_memory_range(start_pfn, end_pfn, (void *)MEM_ONLINE,
>  				  change_memblock_state);
> +		unlock_device_hotplug();
>  		return false;
>  	}
>  
>  	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
>  			  change_memblock_state);
>  
> -	remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
> +	__remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
>  
> +	unlock_device_hotplug();
>  	return true;
>  }
>  
> 
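For reference, here is a sketch of how memtrace_offline_pages() reads with the
hunks above applied; it is reconstructed from the diff context, so surrounding
code in the actual tree may differ slightly. The switch from remove_memory()
to __remove_memory() lines up with patch 1/6 of this series, which makes
remove_memory() take device_hotplug_lock itself, so a caller that already
holds the lock has to use the underscored variant.

static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
{
	u64 end_pfn = start_pfn + nr_pages - 1;

	/* Take device_hotplug_lock so sysfs online/offline cannot race. */
	lock_device_hotplug();

	/* Bail out unless every memory block in the range is online. */
	if (walk_memory_range(start_pfn, end_pfn, NULL,
	    check_memblock_online)) {
		unlock_device_hotplug();
		return false;
	}

	/* Mark the blocks as going offline before calling offline_pages(). */
	walk_memory_range(start_pfn, end_pfn, (void *)MEM_GOING_OFFLINE,
			  change_memblock_state);

	if (offline_pages(start_pfn, nr_pages)) {
		/* Offlining failed; roll the block state back to online. */
		walk_memory_range(start_pfn, end_pfn, (void *)MEM_ONLINE,
				  change_memblock_state);
		unlock_device_hotplug();
		return false;
	}

	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
			  change_memblock_state);

	/* The lock is already held, so use the __remove_memory() variant. */
	__remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);

	unlock_device_hotplug();
	return true;
}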

Thread overview: 64+ messages

2018-08-21 10:44 [PATCH RFCv2 0/6] mm: online/offline_pages called w.o. mem_hotplug_lock David Hildenbrand
2018-08-21 10:44 ` [PATCH RFCv2 1/6] mm/memory_hotplug: make remove_memory() take the device_hotplug_lock David Hildenbrand
2018-08-30 19:35   ` Pasha Tatashin
2018-08-31 13:12     ` David Hildenbrand
2018-08-21 10:44 ` [PATCH RFCv2 2/6] mm/memory_hotplug: make add_memory() take the device_hotplug_lock David Hildenbrand
2018-08-30 19:36   ` Pasha Tatashin
2018-08-21 10:44 ` [PATCH RFCv2 3/6] mm/memory_hotplug: fix online/offline_pages called w.o. mem_hotplug_lock David Hildenbrand
2018-08-30 19:37   ` Pasha Tatashin
2018-09-03  0:36   ` Rashmica
2018-09-17  7:32     ` David Hildenbrand
2018-09-25  1:26       ` Rashmica Gupta
2018-08-21 10:44 ` [PATCH RFCv2 4/6] powerpc/powernv: hold device_hotplug_lock when calling device_online() David Hildenbrand
2018-08-30 19:38   ` Pasha Tatashin
2018-08-21 10:44 ` [PATCH RFCv2 5/6] powerpc/powernv: hold device_hotplug_lock in memtrace_offline_pages() David Hildenbrand
2018-08-30 19:38   ` Pasha Tatashin [this message]
2018-08-21 10:44 ` [PATCH RFCv2 6/6] memory-hotplug.txt: Add some details about locking internals David Hildenbrand
2018-08-30 19:38   ` Pasha Tatashin
2018-08-30 12:31 ` [PATCH RFCv2 0/6] mm: online/offline_pages called w.o. mem_hotplug_lock David Hildenbrand
2018-08-30 15:54   ` Pasha Tatashin
2018-08-31 20:54 ` Oscar Salvador
2018-09-01 14:03   ` David Hildenbrand
