linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/1] Skip over regions of invalid pfns with NUMA=n && HAVE_MEMBLOCK=y
@ 2018-01-21 14:47 Eugeniu Rosca
  2018-01-21 14:47 ` [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA Eugeniu Rosca
  0 siblings, 1 reply; 8+ messages in thread
From: Eugeniu Rosca @ 2018-01-21 14:47 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Hillf Danton, Johannes Weiner, Paul Burton,
	James Hartley
  Cc: Eugeniu Rosca, linux-kernel, linux-mm

Hello MM/kernel experts,

I include this cover letter to present some background and motivation
behind the patch, although the description included in the patch itself
should already be rich enough.

The context of this change is an effort to optimize the boot time of
the Rcar Gen3 SoC family, ultimately driven by automotive requirements
like the (well-known?) "2-seconds-to-rear-view-camera".

To fulfill those, we create a defconfig based on the vanilla arm64
defconfig, which is then tailored to the needs of the Rcar Gen3 SoCs.
This allows us to reduce the kernel binary Image size by almost 50%.
We are very picky during this cleanup process, to the point that, as
showcased with this patch, we start submitting changes to the MM core,
where (to be honest) we don't have much expertise.

As mentioned in the description of attached patch, disabling NUMA in
the v4.15-rc8 arm64 kernel decreases the binary Image by 64kB, but,
at the same time, increases the H3ULCB boot time by ~140ms, which is
counterintuitive, since by disabling NUMA we expect to get rid of
unused NUMA infrastructure and skip unneeded NUMA init.

As already mentioned in the attached patch, the slowdown happens because
v4.11-rc1 commit b92df1de5d28 ("mm: page_alloc: skip over regions of
invalid pfns where possible") is guarded by
CONFIG_HAVE_MEMBLOCK_NODE_MAP, which on arm64 depends on NUMA:
$> git grep HAVE_MEMBLOCK_NODE_MAP | grep arm64
arch/arm64/Kconfig:     select HAVE_MEMBLOCK_NODE_MAP if NUMA

The attached patch attempts to present some evidence that the
aforementioned commit can speed up the execution of memmap_init_zone()
not only on arm64 NUMA, but also on arm64 non-NUMA machines. This is
achieved by "relaxing" the dependency of memblock_next_valid_pfn()
from being guarded by CONFIG_HAVE_MEMBLOCK_NODE_MAP to being
guarded by the more generic CONFIG_HAVE_MEMBLOCK.
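
For reference, here is a simplified sketch of what
memblock_next_valid_pfn() does (paraphrased from the mm/memblock.c code
introduced by b92df1de5d28; the exact kernel source may differ in
detail): instead of probing every single pfn with early_pfn_valid(), it
binary-searches the sorted memblock.memory region array and jumps
straight past any hole, which is why memmap_init_zone() crosses large
invalid ranges so much faster:

/*
 * Simplified sketch, not the literal kernel source: return the next
 * pfn (capped at max_pfn) that falls inside one of the sorted,
 * non-overlapping memblock.memory regions.
 */
unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn)
{
	struct memblock_type *type = &memblock.memory;
	unsigned int mid, left = 0, right = type->cnt;
	phys_addr_t addr = PFN_PHYS(++pfn);

	do {
		mid = (right + left) / 2;

		if (addr < type->regions[mid].base)
			right = mid;		/* pfn lies below region 'mid' */
		else if (addr >= (type->regions[mid].base +
				  type->regions[mid].size))
			left = mid + 1;		/* pfn lies above region 'mid' */
		else
			return pfn;		/* pfn is inside region 'mid', so it is valid */
	} while (left < right);

	if (right == type->cnt)
		return -1UL;			/* no valid pfn above 'pfn' */
	return min(PHYS_PFN(type->regions[right].base), max_pfn);
}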

If this doesn't sound or feel right, I would appreciate your feedback.
I will definitely participate in testing any alternative proposals you
may come up with. TIA!

Best regards,
Eugeniu.

Changes v1->v2:
- Fix ARCH=tile build error [1], signalled by kbuild test robot
- Re-measure Rcar H3ULCB boot time improvement on v4.15-rc8

Eugeniu Rosca (1):
  mm: page_alloc: skip over regions of invalid pfns on UMA

 include/linux/memblock.h | 3 ++-
 mm/memblock.c            | 2 ++
 mm/page_alloc.c          | 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

[1] kbuild test robot reported for ARCH=tile with [PATCH v1]:

    mm/page_alloc.c: In function 'memmap_init_zone':
 >> mm/page_alloc.c:5359:10: error: implicit declaration of function 
 >> 'memblock_next_valid_pfn'; did you mean 'memblock_virt_alloc_low'? 
 >> [-Werror=implicit-function-declaration]
        pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
              ^~~~~~~~~~~~~~~~~~~~~~~
              memblock_virt_alloc_low
    cc1: some warnings being treated as errors

-- 
2.14.2


* [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-21 14:47 [PATCH v2 0/1] Skip over regions of invalid pfns with NUMA=n && HAVE_MEMBLOCK=y Eugeniu Rosca
@ 2018-01-21 14:47 ` Eugeniu Rosca
  2018-01-22  1:21   ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Eugeniu Rosca @ 2018-01-21 14:47 UTC (permalink / raw)
  To: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Hillf Danton, Johannes Weiner, Paul Burton,
	James Hartley
  Cc: Eugeniu Rosca, linux-kernel, linux-mm

As a result of bisecting the v4.10..v4.11 commit range, it was
determined that commits [1] and [2] are both responsible for a ~140ms
early startup improvement on the Rcar-H3-ES20 arm64 platform.

Since the Rcar Gen3 family is not NUMA, we don't define CONFIG_NUMA in
the rcar3 defconfig (which also reduces the kernel binary Image by
~64KB), but in doing so we lose the boot time improvement.

This patch makes optimization [2] available on UMA systems that
support CONFIG_HAVE_MEMBLOCK.

Testing this change on the Rcar H3-ULCB with a v4.15-rc8 kernel
(vanilla arm64 defconfig + NUMA=n) shows a speed-up of ~140ms (from [3]
to [4]) in the execution of memmap_init_zone().

No boot time improvement is observed on the Apollo Lake SoC.

[1] commit 0f84832fb8f9 ("arm64: defconfig: Enable NUMA and NUMA_BALANCING")
[2] commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns where possible")

[3] 179ms spent in memmap_init_zone() on H3ULCB w/o this patch (NUMA=n)
[    2.408716] On node 0 totalpages: 1015808
[    2.408720]   DMA zone: 3584 pages used for memmap
[    2.408723]   DMA zone: 0 pages reserved
[    2.408726]   DMA zone: 229376 pages, LIFO batch:31
[    2.408729] > memmap_init_zone
[    2.429506] < memmap_init_zone
[    2.429512]   Normal zone: 12288 pages used for memmap
[    2.429514]   Normal zone: 786432 pages, LIFO batch:31
[    2.429516] > memmap_init_zone
[    2.587980] < memmap_init_zone
[    2.588013] psci: probing for conduit method from DT.

[4] 38ms spent in memmap_init_zone() on H3ULCB with this patch (NUMA=n)
[    2.415661] On node 0 totalpages: 1015808
[    2.415664]   DMA zone: 3584 pages used for memmap
[    2.415667]   DMA zone: 0 pages reserved
[    2.415670]   DMA zone: 229376 pages, LIFO batch:31
[    2.415673] > memmap_init_zone
[    2.424245] < memmap_init_zone
[    2.424250]   Normal zone: 12288 pages used for memmap
[    2.424253]   Normal zone: 786432 pages, LIFO batch:31
[    2.424256] > memmap_init_zone
[    2.453984] < memmap_init_zone
[    2.454016] psci: probing for conduit method from DT.
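
The "> memmap_init_zone" / "< memmap_init_zone" markers in [3] and [4]
are not mainline output; they come from local instrumentation. A
minimal sketch of that instrumentation, assuming the printks wrap the
memmap_init() call in free_area_init_core() (the exact placement is an
assumption and is not part of this patch):

	/*
	 * Hypothetical local tracing only, not part of this patch: the
	 * printk timestamps in the boot log then bracket the time spent
	 * initializing the memmap of each zone.
	 */
	pr_info("> memmap_init_zone\n");
	memmap_init(size, nid, j, zone_start_pfn);
	pr_info("< memmap_init_zone\n");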

Signed-off-by: Eugeniu Rosca <erosca@de.adit-jv.com>
---
 include/linux/memblock.h | 3 ++-
 mm/memblock.c            | 2 ++
 mm/page_alloc.c          | 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 7ed0f778..876c0a33 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -182,12 +182,13 @@ static inline bool memblock_is_nomap(struct memblock_region *m)
 	return m->flags & MEMBLOCK_NOMAP;
 }
 
+unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
+
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long  *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 			  unsigned long *out_end_pfn, int *out_nid);
-unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
 
 /**
  * for_each_mem_pfn_range - early memory pfn range iterator
diff --git a/mm/memblock.c b/mm/memblock.c
index 46aacdfa..ad48cf20 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1100,6 +1100,7 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 	if (out_nid)
 		*out_nid = r->nid;
 }
+#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
 unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
 						      unsigned long max_pfn)
@@ -1129,6 +1130,7 @@ unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
 		return min(PHYS_PFN(type->regions[right].base), max_pfn);
 }
 
+#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 /**
  * memblock_set_node - set node ID on memblock regions
  * @base: base of area to set node ID for
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688b..9ad47f46 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5344,7 +5344,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			goto not_early;
 
 		if (!early_pfn_valid(pfn)) {
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
+#ifdef CONFIG_HAVE_MEMBLOCK
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-- 
2.14.2


* Re: [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-21 14:47 ` [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA Eugeniu Rosca
@ 2018-01-22  1:21   ` Matthew Wilcox
  2018-01-22 20:25     ` Eugeniu Rosca
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2018-01-22  1:21 UTC (permalink / raw)
  To: Eugeniu Rosca
  Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Hillf Danton, Johannes Weiner, Paul Burton,
	James Hartley, linux-kernel, linux-mm


I like the patch.  I think it could be better.

> +++ b/mm/page_alloc.c
> @@ -5344,7 +5344,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  			goto not_early;
>  
>  		if (!early_pfn_valid(pfn)) {
> -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> +#ifdef CONFIG_HAVE_MEMBLOCK
>  			/*
>  			 * Skip to the pfn preceding the next valid one (or
>  			 * end_pfn), such that we hit a valid pfn (or end_pfn)

This ifdef makes me sad.  Here's more of the context:

                if (!early_pfn_valid(pfn)) {
#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
                        /*
                         * Skip to the pfn preceding the next valid one (or
                         * end_pfn), such that we hit a valid pfn (or end_pfn)
                         * on our next iteration of the loop.
                         */
                        pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
#endif
                        continue;
                }

This is crying out for:

#ifdef CONFIG_HAVE_MEMBLOCK
unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
#else
static inline unsigned long memblock_next_valid_pfn(unsigned long pfn,
		unsigned long max_pfn)
{
	return pfn + 1;
}
#endif

in a header file somewhere.


* Re: [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-22  1:21   ` Matthew Wilcox
@ 2018-01-22 20:25     ` Eugeniu Rosca
  2018-01-23  0:45       ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Eugeniu Rosca @ 2018-01-22 20:25 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Johannes Weiner, Paul Burton, James Hartley,
	linux-kernel, linux-mm

Hi Matthew and thanks for your feedback and review comments!

On Sun, Jan 21, 2018 at 05:21:56PM -0800, Matthew Wilcox wrote:
> 
> I like the patch.  I think it could be better.
> 
> > +++ b/mm/page_alloc.c
> > @@ -5344,7 +5344,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> >  			goto not_early;
> >  
> >  		if (!early_pfn_valid(pfn)) {
> > -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> > +#ifdef CONFIG_HAVE_MEMBLOCK
> >  			/*
> >  			 * Skip to the pfn preceding the next valid one (or
> >  			 * end_pfn), such that we hit a valid pfn (or end_pfn)
> 
> This ifdef makes me sad.  Here's more of the context:
> 
>                 if (!early_pfn_valid(pfn)) {
> #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
>                         /*
>                          * Skip to the pfn preceding the next valid one (or
>                          * end_pfn), such that we hit a valid pfn (or end_pfn)
>                          * on our next iteration of the loop.
>                          */
>                         pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
> #endif
>                         continue;
>                 }
> 
> This is crying out for:
> 
> #ifdef CONFIG_HAVE_MEMBLOCK
> unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
> #else
> static inline unsigned long memblock_next_valid_pfn(unsigned long pfn,
> 		unsigned long max_pfn)
> {
> 	return pfn + 1;
> }
> #endif
> 
> in a header file somewhere.
> 

Here is what I came up with, based on your proposal:

---------------------------------------------------------

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 7ed0f7782d16..9efd592c5da4 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -187,7 +187,6 @@ int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
 			    unsigned long  *end_pfn);
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 			  unsigned long *out_end_pfn, int *out_nid);
-unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
 
 /**
  * for_each_mem_pfn_range - early memory pfn range iterator
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff739cd..b82b30522585 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2064,8 +2064,14 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
 
 #ifdef CONFIG_HAVE_MEMBLOCK
 void zero_resv_unavail(void);
+unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
 #else
 static inline void zero_resv_unavail(void) {}
+static inline unsigned long memblock_next_valid_pfn(unsigned long pfn,
+						    unsigned long max_pfn)
+{
+	return pfn + 1;
+}
 #endif
 
 extern void set_dma_reserve(unsigned long new_dma_reserve);
diff --git a/mm/memblock.c b/mm/memblock.c
index 46aacdfa4f4d..ad48cf200e3b 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1100,6 +1100,7 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
 	if (out_nid)
 		*out_nid = r->nid;
 }
+#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
 unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
 						      unsigned long max_pfn)
@@ -1129,6 +1130,7 @@ unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
 		return min(PHYS_PFN(type->regions[right].base), max_pfn);
 }
 
+#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 /**
  * memblock_set_node - set node ID on memblock regions
  * @base: base of area to set node ID for
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688b6a0a..4a3d5936a9a0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5344,14 +5344,12 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			goto not_early;
 
 		if (!early_pfn_valid(pfn)) {
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
 			 * on our next iteration of the loop.
 			 */
 			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
-#endif
 			continue;
 		}
 		if (!early_pfn_in_nid(pfn, nid))

---------------------------------------------------------

Here are the sanity checks and tests done (all on v4.15-rc9):
- compiled natively on x86_64
- cross-compiled for ARCH=arm64 (NUMA=y/n), ARCH=tile (for which kbuild
  test robot reported a build failure with [PATCH v1])
- no new issues reported by:
  - checkpatch --strict
  - make W=1
  - make CHECK="/path/to/smatch -p=kernel --two-passes --spammy" C=2 mm/
  - make C=2 CF="-D__CHECK_ENDIAN__"  -Wunused-function mm/
  - cppcheck --force --enable=all --inconclusive mm/
- re-tested on H3ULCB and confirmed the same behavior as with [PATCH v2]

If there are no other comments, I will submit [PATCH v3] in the coming days.

Many thanks!

Best regards,
Eugeniu.


* Re: [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-22 20:25     ` Eugeniu Rosca
@ 2018-01-23  0:45       ` Matthew Wilcox
  2018-01-23 19:00         ` Eugeniu Rosca
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2018-01-23  0:45 UTC (permalink / raw)
  To: Eugeniu Rosca
  Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Johannes Weiner, Paul Burton, James Hartley,
	linux-kernel, linux-mm

On Mon, Jan 22, 2018 at 09:25:30PM +0100, Eugeniu Rosca wrote:
> Here is what I came up with, based on your proposal:

Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>


* Re: [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-23  0:45       ` Matthew Wilcox
@ 2018-01-23 19:00         ` Eugeniu Rosca
  2018-01-23 20:27           ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Eugeniu Rosca @ 2018-01-23 19:00 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Johannes Weiner, Paul Burton, James Hartley,
	linux-kernel, linux-mm, Eugeniu Rosca

Hi Matthew,

On Mon, Jan 22, 2018 at 04:45:51PM -0800, Matthew Wilcox wrote:
> On Mon, Jan 22, 2018 at 09:25:30PM +0100, Eugeniu Rosca wrote:
> > Here is what I came up with, based on your proposal:
> 
> Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>

Apologies for not knowing the process. Should I include the
`Reviewed-by` in the description of the next patch or will it be
done by the maintainer who will hopefully pick up the patch?

Regards,
Eugeniu.


* Re: [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-23 19:00         ` Eugeniu Rosca
@ 2018-01-23 20:27           ` Matthew Wilcox
  2018-01-24 14:51             ` Eugeniu Rosca
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2018-01-23 20:27 UTC (permalink / raw)
  To: Eugeniu Rosca
  Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Johannes Weiner, Paul Burton, James Hartley,
	linux-kernel, linux-mm

On Tue, Jan 23, 2018 at 08:00:36PM +0100, Eugeniu Rosca wrote:
> Hi Matthew,
> 
> On Mon, Jan 22, 2018 at 04:45:51PM -0800, Matthew Wilcox wrote:
> > On Mon, Jan 22, 2018 at 09:25:30PM +0100, Eugeniu Rosca wrote:
> > > Here is what I came up with, based on your proposal:
> > 
> > Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
> 
> Apologies for not knowing the process. Should I include the
> `Reviewed-by` in the description of the next patch or will it be
> done by the maintainer who will hopefully pick up the patch?

It's OK to not know the process ;-)

The next step is for you to integrate the changes you made here into a
fresh patch against mainline, and then add my Reviewed-by: tag underneath
your Signed-off-by: line.


* Re: [PATCH v2 1/1] mm: page_alloc: skip over regions of invalid pfns on UMA
  2018-01-23 20:27           ` Matthew Wilcox
@ 2018-01-24 14:51             ` Eugeniu Rosca
  0 siblings, 0 replies; 8+ messages in thread
From: Eugeniu Rosca @ 2018-01-24 14:51 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Michal Hocko, Catalin Marinas, Ard Biesheuvel,
	Steven Sistare, AKASHI Takahiro, Pavel Tatashin, Gioh Kim,
	Heiko Carstens, Wei Yang, Miles Chen, Vlastimil Babka,
	Mel Gorman, Johannes Weiner, Paul Burton, James Hartley,
	linux-kernel, linux-mm

On Tue, Jan 23, 2018 at 12:27:00PM -0800, Matthew Wilcox wrote:
> On Tue, Jan 23, 2018 at 08:00:36PM +0100, Eugeniu Rosca wrote:
> > Hi Matthew,
> > 
> > On Mon, Jan 22, 2018 at 04:45:51PM -0800, Matthew Wilcox wrote:
> > > On Mon, Jan 22, 2018 at 09:25:30PM +0100, Eugeniu Rosca wrote:
> > > > Here is what I came up with, based on your proposal:
> > > 
> > > Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
> > 
> > Apologies for not knowing the process. Should I include the
> > `Reviewed-by` in the description of the next patch or will it be
> > done by the maintainer who will hopefully pick up the patch?
> 
> It's OK to not know the process ;-)
> 
> The next step is for you to integrate the changes you made here into a
> fresh patch against mainline, and then add my Reviewed-by: tag underneath
> your Signed-off-by: line.

[PATCH v3] pushed to https://marc.info/?l=linux-mm&m=151680461529468&w=2
Thanks for your support!

Best regards,
Eugeniu.


