* [PATCH v2] mm: page_alloc: avoid excessive IRQ disabled times in free_unref_page_list
From: Lucas Stach @ 2017-12-08 11:42 UTC
  To: Andrew Morton, Mel Gorman
  Cc: Michal Hocko, Vlastimil Babka, linux-mm, patchwork-lst, kernel

Since commit 9cca35d42eb6 ("mm, page_alloc: enable/disable IRQs once
when freeing a list of pages"), we see excessive IRQ-disabled times of
up to 25ms on an embedded ARM system (tracing overhead included).
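
One way to observe such IRQs-off latencies is the irqsoff tracer; a
minimal recipe, assuming CONFIG_IRQSOFF_TRACER=y and tracefs available
under /sys/kernel/debug/tracing (tracing_max_latency is reported in
microseconds):

  # echo 0 > /sys/kernel/debug/tracing/tracing_max_latency
  # echo irqsoff > /sys/kernel/debug/tracing/current_tracer
  ... run the workload that frees the graphics buffers ...
  # cat /sys/kernel/debug/tracing/tracing_max_latency
  # echo nop > /sys/kernel/debug/tracing/current_tracer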

This is due to graphics buffers being freed back to the system via
release_pages(). Graphics buffers can be huge, so it's not hard to hit
cases where the list of pages to free has 2048 entries. Disabling IRQs
while freeing all those pages is clearly not a good idea.

Introduce a batch limit, which allows IRQs to be serviced once every
few pages. The batch count, SWAP_CLUSTER_MAX, is the same one used in
other parts of the MM subsystem when dealing with IRQ-disabled
regions.
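
As a back-of-the-envelope check: SWAP_CLUSTER_MAX is 32, so the
2048-entry list above is split into 64 IRQs-off sections, and a 25ms
IRQs-off stretch becomes sections of roughly 25ms / 64, i.e. about
0.4ms each (assuming the free cost is spread evenly across the list).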

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
---
v2: Try to keep the working set of pages used in the second loop cache
    hot by going through both loops in swathes of SWAP_CLUSTER_MAX
    entries, as suggested by Andrew Morton.

    To avoid replicating the batch counting in both loops, I
    introduced a local batched_free_list on which the pages to be
    freed in the critical section are collected. IMO this makes
    the code easier to follow.
---
 mm/page_alloc.c | 42 ++++++++++++++++++++++++++++--------------
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73f5d4556b3d..522870f1a8f2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2685,23 +2685,37 @@ void free_unref_page_list(struct list_head *list)
 	struct page *page, *next;
 	unsigned long flags, pfn;
 
-	/* Prepare pages for freeing */
-	list_for_each_entry_safe(page, next, list, lru) {
-		pfn = page_to_pfn(page);
-		if (!free_unref_page_prepare(page, pfn))
-			list_del(&page->lru);
-		set_page_private(page, pfn);
-	}
+	while (!list_empty(list)) {
+		LIST_HEAD(batched_free_list);
+		unsigned int batch_count = 0;
 
-	local_irq_save(flags);
-	list_for_each_entry_safe(page, next, list, lru) {
-		unsigned long pfn = page_private(page);
+		/*
+		 * Prepare pages for freeing. Collect at most SWAP_CLUSTER_MAX
+		 * pages to be freed in a single IRQs-off critical section.
+		 */
+		list_for_each_entry_safe(page, next, list, lru) {
+			pfn = page_to_pfn(page);
+			if (!free_unref_page_prepare(page, pfn)) {
+				list_del(&page->lru);
+			} else {
+				list_move(&page->lru, &batched_free_list);
+				batch_count++;
+			}
+			set_page_private(page, pfn);
+			if (batch_count == SWAP_CLUSTER_MAX)
+				break;
+		}
 
-		set_page_private(page, 0);
-		trace_mm_page_free_batched(page);
-		free_unref_page_commit(page, pfn);
+		local_irq_save(flags);
+		list_for_each_entry_safe(page, next, &batched_free_list, lru) {
+			unsigned long pfn = page_private(page);
+
+			set_page_private(page, 0);
+			trace_mm_page_free_batched(page);
+			free_unref_page_commit(page, pfn);
+		}
+		local_irq_restore(flags);
 	}
-	local_irq_restore(flags);
 }
 
 /*
-- 
2.11.0


* Re: [PATCH v2] mm: page_alloc: avoid excessive IRQ disabled times in free_unref_page_list
From: Andrew Morton @ 2017-12-08 21:49 UTC
  To: Lucas Stach
  Cc: Mel Gorman, Michal Hocko, Vlastimil Babka, linux-mm,
	patchwork-lst, kernel

On Fri,  8 Dec 2017 12:42:17 +0100 Lucas Stach <l.stach@pengutronix.de> wrote:

> [...]
> ---
> v2: Try to keep the working set of pages used in the second loop cache
>     hot by going through both loops in swathes of SWAP_CLUSTER_MAX
>     entries, as suggested by Andrew Morton.
> 
>     To avoid replicating the batch counting in both loops, I
>     introduced a local batched_free_list on which the pages to be
>     freed in the critical section are collected. IMO this makes
>     the code easier to follow.

Thanks.  Is anyone motivated enough to determine whether this is
worthwhile?



* Re: [PATCH v2] mm: page_alloc: avoid excessive IRQ disabled times in free_unref_page_list
From: Mel Gorman @ 2017-12-11 11:02 UTC
  To: Andrew Morton
  Cc: Lucas Stach, Michal Hocko, Vlastimil Babka, linux-mm,
	patchwork-lst, kernel

On Fri, Dec 08, 2017 at 01:49:37PM -0800, Andrew Morton wrote:
> On Fri,  8 Dec 2017 12:42:17 +0100 Lucas Stach <l.stach@pengutronix.de> wrote:
> 
> > [...]
> 
> Thanks.  Is anyone motivated enough to determine whether this is
> worthwhile?
> 

I didn't try, and I'm not sure I'll get time before dropping offline
for the holidays, but I would expect the benefit to be marginal and
only detectable through close examination of cache miss stats. We're
talking about the cache hotness of a few struct pages for one set of
operations before the pages are back on the per-cpu lists. For any
large release_pages operation, they are likely to be pushed off the
per-cpu lists and onto the buddy lists, where the cache data will be
thrashed in the near future. It's a nice micro-optimisation, but I
expect it to be lost in the noise of a release_pages operation.
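
(For anyone who does want to measure it: a rough starting point,
assuming perf with hardware cache events is usable on the target,
would be to compare something like

  perf stat -e cache-references,cache-misses <graphics workload>

with and without the patch, using a workload that triggers large
release_pages() calls.)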

-- 
Mel Gorman
SUSE Labs

