* [PATCH] mm,memory_hotplug: Drop unneeded locking
@ 2021-05-31  9:39 Oscar Salvador
  2021-05-31 11:37 ` Michal Hocko
  2021-06-01  5:17 ` Anshuman Khandual
  0 siblings, 2 replies; 8+ messages in thread
From: Oscar Salvador @ 2021-05-31  9:39 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Vlastimil Babka, Pavel Tatashin, linux-mm, linux-kernel,
	Oscar Salvador

Currently, the memory-hotplug code takes the zone's span_writelock
and the pgdat's resize_lock when resizing the node/zone's spanned
pages via move_pfn_range_to_zone() and remove_pfn_range_from_zone(),
and when resizing the node's and zone's present pages via
adjust_present_page_count().

These locks are also taken during system initialization at boot time,
where they protect parallel struct page initialization, but they
should not really be needed in memory-hotplug, where all operations
are a) synchronized at the device level and b) serialized by
mem_hotplug_lock.
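
For readers who want to see what "serialized by mem_hotplug_lock"
means in practice, here is a rough sketch of that serialization
(simplified; example_online_pages() and its exact call sequence are
made up for illustration and are not the upstream online_pages()):

#include <linux/memory_hotplug.h>	/* mem_hotplug_begin/done() */
#include <linux/mmzone.h>

/*
 * Illustrative sketch only: every hotplug operation that resizes a
 * zone/node runs between mem_hotplug_begin() and mem_hotplug_done(),
 * which take the global mem_hotplug_lock percpu-rwsem for write.
 * Two resizes therefore never run concurrently, which is why the
 * span_writelock/resize_lock pairs dropped here add no protection
 * on this path.
 */
static int example_online_pages(unsigned long start_pfn,
				unsigned long nr_pages, struct zone *zone)
{
	mem_hotplug_begin();

	/* resizes zone->{zone_start_pfn,spanned_pages} and the pgdat span */
	move_pfn_range_to_zone(zone, start_pfn, nr_pages, NULL,
			       MIGRATE_ISOLATE);

	/* ... mark sections online, adjust present pages, etc. ... */

	mem_hotplug_done();
	return 0;
}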

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 075b34803fec..9edbc57055bf 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -329,7 +329,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 	unsigned long pfn;
 	int nid = zone_to_nid(zone);
 
-	zone_span_writelock(zone);
 	if (zone->zone_start_pfn == start_pfn) {
 		/*
 		 * If the section is smallest section in the zone, it need
@@ -362,7 +361,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
 			zone->spanned_pages = 0;
 		}
 	}
-	zone_span_writeunlock(zone);
 }
 
 static void update_pgdat_span(struct pglist_data *pgdat)
@@ -424,10 +422,8 @@ void __ref remove_pfn_range_from_zone(struct zone *zone,
 
 	clear_zone_contiguous(zone);
 
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
 	update_pgdat_span(pgdat);
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 
 	set_zone_contiguous(zone);
 }
@@ -638,15 +634,10 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 
 	clear_zone_contiguous(zone);
 
-	/* TODO Huh pgdat is irqsave while zone is not. It used to be like that before */
-	pgdat_resize_lock(pgdat, &flags);
-	zone_span_writelock(zone);
 	if (zone_is_empty(zone))
 		init_currently_empty_zone(zone, start_pfn, nr_pages);
 	resize_zone_range(zone, start_pfn, nr_pages);
-	zone_span_writeunlock(zone);
 	resize_pgdat_range(pgdat, start_pfn, nr_pages);
-	pgdat_resize_unlock(pgdat, &flags);
 
 	/*
 	 * Subsection population requires care in pfn_to_online_page().
@@ -739,9 +730,7 @@ void adjust_present_page_count(struct zone *zone, long nr_pages)
 	unsigned long flags;
 
 	zone->present_pages += nr_pages;
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	zone->zone_pgdat->node_present_pages += nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 }
 
 int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
-- 
2.16.3



* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-05-31  9:39 [PATCH] mm,memory_hotplug: Drop unneeded locking Oscar Salvador
@ 2021-05-31 11:37 ` Michal Hocko
  2021-06-01  5:17 ` Anshuman Khandual
  1 sibling, 0 replies; 8+ messages in thread
From: Michal Hocko @ 2021-05-31 11:37 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Andrew Morton, David Hildenbrand, Anshuman Khandual,
	Vlastimil Babka, Pavel Tatashin, linux-mm, linux-kernel

On Mon 31-05-21 11:39:58, Oscar Salvador wrote:
> Currently, the memory-hotplug code takes the zone's span_writelock
> and the pgdat's resize_lock when resizing the node/zone's spanned
> pages via move_pfn_range_to_zone() and remove_pfn_range_from_zone(),
> and when resizing the node's and zone's present pages via
> adjust_present_page_count().
> 
> These locks are also taken during system initialization at boot time,
> where they protect parallel struct page initialization, but they
> should not really be needed in memory-hotplug, where all operations
> are a) synchronized at the device level and b) serialized by
> mem_hotplug_lock.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: David Hildenbrand <david@redhat.com>

Thanks. I wanted to kill these locks quite some time ago (see the TODO
you are removing ;)) but never got around to it.

Acked-by: Michal Hocko <mhocko@suse.com>

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-05-31  9:39 [PATCH] mm,memory_hotplug: Drop unneeded locking Oscar Salvador
  2021-05-31 11:37 ` Michal Hocko
@ 2021-06-01  5:17 ` Anshuman Khandual
  2021-06-01  7:47   ` Oscar Salvador
  1 sibling, 1 reply; 8+ messages in thread
From: Anshuman Khandual @ 2021-06-01  5:17 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Vlastimil Babka, Pavel Tatashin,
	linux-mm, linux-kernel



On 5/31/21 3:09 PM, Oscar Salvador wrote:
> Currently, the memory-hotplug code takes the zone's span_writelock
> and the pgdat's resize_lock when resizing the node/zone's spanned
> pages via move_pfn_range_to_zone() and remove_pfn_range_from_zone(),
> and when resizing the node's and zone's present pages via
> adjust_present_page_count().
> 
> These locks are also taken during system initialization at boot time,
> where they protect parallel struct page initialization, but they
> should not really be needed in memory-hotplug, where all operations
> are a) synchronized at the device level and b) serialized by
> mem_hotplug_lock.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: David Hildenbrand <david@redhat.com>

Should we also just drop the zone_span_write[lock|unlock]() helpers, as
there are no users left?


* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-06-01  5:17 ` Anshuman Khandual
@ 2021-06-01  7:47   ` Oscar Salvador
  2021-06-01  8:02     ` David Hildenbrand
  0 siblings, 1 reply; 8+ messages in thread
From: Oscar Salvador @ 2021-06-01  7:47 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Andrew Morton, David Hildenbrand, Michal Hocko, Vlastimil Babka,
	Pavel Tatashin, linux-mm, linux-kernel

On Tue, Jun 01, 2021 at 10:47:31AM +0530, Anshuman Khandual wrote:
> Should we also just drop the zone_span_write[lock|unlock]() helpers, as
> there are no users left?

Yes, definitely.
Andrew, can you squash this on top? Thanks:

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index a7fd2c3ccb77..27d8ba1d32cb 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -67,10 +67,6 @@ struct range mhp_get_pluggable_range(bool need_mapping);
 
 /*
  * Zone resizing functions
- *
- * Note: any attempt to resize a zone should has pgdat_resize_lock()
- * zone_span_writelock() both held. This ensure the size of a zone
- * can't be changed while pgdat_resize_lock() held.
  */
 static inline unsigned zone_span_seqbegin(struct zone *zone)
 {
@@ -80,14 +76,6 @@ static inline int zone_span_seqretry(struct zone *zone, unsigned iv)
 {
 	return read_seqretry(&zone->span_seqlock, iv);
 }
-static inline void zone_span_writelock(struct zone *zone)
-{
-	write_seqlock(&zone->span_seqlock);
-}
-static inline void zone_span_writeunlock(struct zone *zone)
-{
-	write_sequnlock(&zone->span_seqlock);
-}
 static inline void zone_seqlock_init(struct zone *zone)
 {
 	seqlock_init(&zone->span_seqlock);
@@ -233,8 +221,6 @@ static inline int zone_span_seqretry(struct zone *zone, unsigned iv)
 {
 	return 0;
 }
-static inline void zone_span_writelock(struct zone *zone) {}
-static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
 static inline int try_online_node(int nid)


-- 
Oscar Salvador
SUSE L3


* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-06-01  7:47   ` Oscar Salvador
@ 2021-06-01  8:02     ` David Hildenbrand
  2021-06-01  8:12       ` Oscar Salvador
  0 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2021-06-01  8:02 UTC (permalink / raw)
  To: Oscar Salvador, Anshuman Khandual
  Cc: Andrew Morton, Michal Hocko, Vlastimil Babka, Pavel Tatashin,
	linux-mm, linux-kernel

On 01.06.21 09:47, Oscar Salvador wrote:
> On Tue, Jun 01, 2021 at 10:47:31AM +0530, Anshuman Khandual wrote:
>> Should we also just drop the zone_span_write[lock|unlock]() helpers, as
>> there are no users left?
> 
> Yes, definitely.
> Andrew, can you squash this on top? Thanks:
> 
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index a7fd2c3ccb77..27d8ba1d32cb 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -67,10 +67,6 @@ struct range mhp_get_pluggable_range(bool need_mapping);
>   
>   /*
>    * Zone resizing functions
> - *
> - * Note: any attempt to resize a zone should has pgdat_resize_lock()
> - * zone_span_writelock() both held. This ensure the size of a zone
> - * can't be changed while pgdat_resize_lock() held.
>    */
>   static inline unsigned zone_span_seqbegin(struct zone *zone)
>   {
> @@ -80,14 +76,6 @@ static inline int zone_span_seqretry(struct zone *zone, unsigned iv)
>   {
>   	return read_seqretry(&zone->span_seqlock, iv);
>   }
> -static inline void zone_span_writelock(struct zone *zone)
> -{
> -	write_seqlock(&zone->span_seqlock);
> -}
> -static inline void zone_span_writeunlock(struct zone *zone)
> -{
> -	write_sequnlock(&zone->span_seqlock);
> -}

If there is no writer anymore, why do we have to protect readers?
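
(For reference, the read side in question is the
zone_span_seqbegin()/zone_span_seqretry() retry loop. Below is a rough
sketch of such a reader, modelled on the zone-boundary sanity check in
mm/page_alloc.c rather than copied from it; page_outside_zone() is a
made-up name:)

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/memory_hotplug.h>	/* zone_span_seqbegin/seqretry() */

/*
 * With the last writer gone, zone_span_seqretry() can never ask for a
 * retry, so both this loop and zone->span_seqlock itself become dead
 * weight.
 */
static bool page_outside_zone(struct zone *zone, struct page *page)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned int seq;
	bool outside;

	do {
		seq = zone_span_seqbegin(zone);
		outside = !zone_spans_pfn(zone, pfn);
	} while (zone_span_seqretry(zone, seq));

	return outside;
}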



-- 
Thanks,

David / dhildenb



* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-06-01  8:02     ` David Hildenbrand
@ 2021-06-01  8:12       ` Oscar Salvador
  2021-06-01  9:47         ` Michal Hocko
  0 siblings, 1 reply; 8+ messages in thread
From: Oscar Salvador @ 2021-06-01  8:12 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Anshuman Khandual, Andrew Morton, Michal Hocko, Vlastimil Babka,
	Pavel Tatashin, linux-mm, linux-kernel

On Tue, Jun 01, 2021 at 10:02:54AM +0200, David Hildenbrand wrote:
> If there is no writer anymore, why do we have to protect readers?

Yeah, you are right. 
Let me prepare a v2 as this is getting too sloppy.


-- 
Oscar Salvador
SUSE L3


* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-06-01  8:12       ` Oscar Salvador
@ 2021-06-01  9:47         ` Michal Hocko
  2021-06-01 10:33           ` Oscar Salvador
  0 siblings, 1 reply; 8+ messages in thread
From: Michal Hocko @ 2021-06-01  9:47 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: David Hildenbrand, Anshuman Khandual, Andrew Morton,
	Vlastimil Babka, Pavel Tatashin, linux-mm, linux-kernel

On Tue 01-06-21 10:12:54, Oscar Salvador wrote:
> On Tue, Jun 01, 2021 at 10:02:54AM +0200, David Hildenbrand wrote:
> > If there is no writer anymore, why do we have to protect readers?
> 
> Yeah, you are right. 
> Let me prepare a v2 as this is getting too sloppy.

While you are touching this and want to drill all the way down, it
would be reasonable to drop the pgdat resize lock as well.
It is only used in the early boot code, and we have one executing
thread context per NUMA node during the deferred initialization. I
haven't checked all the potential side effects the lock might have,
but it sounds like there is quite some cleanup potential over there.

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm,memory_hotplug: Drop unneeded locking
  2021-06-01  9:47         ` Michal Hocko
@ 2021-06-01 10:33           ` Oscar Salvador
  0 siblings, 0 replies; 8+ messages in thread
From: Oscar Salvador @ 2021-06-01 10:33 UTC (permalink / raw)
  To: Michal Hocko
  Cc: David Hildenbrand, Anshuman Khandual, Andrew Morton,
	Vlastimil Babka, Pavel Tatashin, linux-mm, linux-kernel

On Tue, Jun 01, 2021 at 11:47:13AM +0200, Michal Hocko wrote:
> While you are touching this and want to drill all the way down, it
> would be reasonable to drop the pgdat resize lock as well.
> It is only used in the early boot code, and we have one executing
> thread context per NUMA node during the deferred initialization. I
> haven't checked all the potential side effects the lock might have,
> but it sounds like there is quite some cleanup potential over there.

I am not sure about that. It is true that deferred_init_memmap() gets
executed on a per-node thread, so it is not a problem by itself, but we
also have deferred_grow_zone(). It might be that while
deferred_init_memmap() is running, we also get calls to
deferred_grow_zone() for the same node, and that would cause some
trouble wrt. first_deferred_pfn. I need to double-check it, but IIRC
that is the case.
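
A rough model of that interaction (heavily simplified, assuming
CONFIG_DEFERRED_STRUCT_PAGE_INIT; the example_*() names are made up
and this is not the upstream code): both paths read and advance
pgdat->first_deferred_pfn, and pgdat_resize_lock() is what keeps that
field coherent between them during early boot:

#include <linux/kernel.h>
#include <linux/memory_hotplug.h>	/* pgdat_resize_lock/unlock() */
#include <linux/mmzone.h>

/* Per-node init thread: claims everything from first_deferred_pfn on. */
static void example_deferred_init_memmap(pg_data_t *pgdat)
{
	unsigned long flags, first_init_pfn;

	pgdat_resize_lock(pgdat, &flags);
	first_init_pfn = pgdat->first_deferred_pfn;
	pgdat->first_deferred_pfn = ULONG_MAX;
	pgdat_resize_unlock(pgdat, &flags);

	/* ... initialize struct pages from first_init_pfn onwards ... */
}

/* Allocation path: grows the zone a bit if the init thread isn't done. */
static bool example_deferred_grow_zone(struct zone *zone,
				       unsigned long nr_pages)
{
	pg_data_t *pgdat = zone->zone_pgdat;
	unsigned long spfn = pgdat->first_deferred_pfn;
	unsigned long flags;

	pgdat_resize_lock(pgdat, &flags);

	/* Someone else grew the zone while we waited for the lock. */
	if (spfn != pgdat->first_deferred_pfn) {
		pgdat_resize_unlock(pgdat, &flags);
		return true;
	}

	/* ... initialize nr_pages worth of struct pages, advancing spfn ... */
	pgdat->first_deferred_pfn = spfn;

	pgdat_resize_unlock(pgdat, &flags);
	return true;
}

Without a common lock, the grow path could race with the init thread's
"claim the rest" update above and rewind first_deferred_pfn, which is
the kind of trouble described here.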


-- 
Oscar Salvador
SUSE L3

