zsmalloc: Fix TLB coherency and build problem

Message ID 1359334808-19794-1-git-send-email-minchan@kernel.org
State New, archived

Commit Message

Minchan Kim Jan. 28, 2013, 1 a.m. UTC
Recently, Matt Sealey reported that zsmalloc fails to build because it uses
local_flush_tlb_kernel_range, an architecture-dependent function that ARM
does not provide for !CONFIG_SMP configurations, so the build ends up with
the following error.

  MODPOST 216 modules
  LZMA    arch/arm/boot/compressed/piggy.lzma
  AS      arch/arm/boot/compressed/lib1funcs.o
ERROR: "v7wbi_flush_kern_tlb_range"
[drivers/staging/zsmalloc/zsmalloc.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
make: *** Waiting for unfinished jobs....

The reason we used that function is that the copy-based method [1]
was really slow on ARM at the time.

A more severe problem is that ARM CPUs can prefetch speculatively, so if we
flush only the local CPU, the TLBs of other CPUs can still hold stale
entries for the mapping we just tore down. Russell King pointed this out. Thanks!
We don't have many choices other than using flush_tlb_kernel_range.

My experiment on a 4-core ARMv7 processor with zsmapbench [2] showed no
meaningful difference between local_flush_tlb_kernel_range and
flush_tlb_kernel_range, and page-table based mapping remains much faster
than copy-based mapping.

* bigger is better.

1. local_flush_tlb_kernel_range: 3918795 mappings
2. flush_tlb_kernel_range : 3989538 mappings
3. copy-based: 635158 mappings

This patch replaces local_flush_tlb_kernel_range with
flush_tlb_kernel_range, which is available on all architectures since the
generic vmalloc allocator already uses it. The build problem should go away,
and the performance loss should be negligible.

[1] f553646, zsmalloc: add page table mapping method
[2] https://github.com/spartacus06/zsmapbench

Cc: stable@vger.kernel.org
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reported-by: Matt Sealey <matt@genesi-usa.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---

Matt, could you test this patch?

 drivers/staging/zsmalloc/zsmalloc-main.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

Comments

Minchan Kim Jan. 30, 2013, 2:48 a.m. UTC | #1
Ping

On Mon, Jan 28, 2013 at 10:00:08AM +0900, Minchan Kim wrote:
> Recently, Matt Sealey reported he fail to build zsmalloc caused by
> using of local_flush_tlb_kernel_range which are architecture dependent
> function so !CONFIG_SMP in ARM couldn't implement it so it ends up
> build error following as.
> 
>   MODPOST 216 modules
>   LZMA    arch/arm/boot/compressed/piggy.lzma
>   AS      arch/arm/boot/compressed/lib1funcs.o
> ERROR: "v7wbi_flush_kern_tlb_range"
> [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
> make[1]: *** [__modpost] Error 1
> make: *** [modules] Error 2
> make: *** Waiting for unfinished jobs....
> 
> The reason we used that function is copy method by [1]
> was really slow in ARM but at that time.
> 
> More severe problem is ARM can prefetch speculatively on other CPUs
> so under us, other TLBs can have an entry only if we do flush local
> CPU. Russell King pointed that. Thanks!
> We don't have many choices except using flush_tlb_kernel_range.
> 
> My experiment in ARMv7 processor 4 core didn't make any difference with
> zsmapbench[2] between local_flush_tlb_kernel_range and flush_tlb_kernel_range
> but still page-table based is much better than copy-based.
> 
> * bigger is better.
> 
> 1. local_flush_tlb_kernel_range: 3918795 mappings
> 2. flush_tlb_kernel_range : 3989538 mappings
> 3. copy-based: 635158 mappings
> 
> This patch replace local_flush_tlb_kernel_range with
> flush_tlb_kernel_range which are avaialbe in all architectures
> because we already have used it in vmalloc allocator which are
> generic one so build problem should go away and performane loss
> shoud be void.
> 
> [1] f553646, zsmalloc: add page table mapping method
> [2] https://github.com/spartacus06/zsmapbench
> 
> Cc: stable@vger.kernel.org
> Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
> Cc: Nitin Gupta <ngupta@vflare.org>
> Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
> Reported-by: Matt Sealey <matt@genesi-usa.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> 
> Matt, Could you test this patch?
> 
>  drivers/staging/zsmalloc/zsmalloc-main.c |   10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> index eb00772..82e627c 100644
> --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> @@ -222,11 +222,9 @@ struct zs_pool {
>  /*
>   * By default, zsmalloc uses a copy-based object mapping method to access
>   * allocations that span two pages. However, if a particular architecture
> - * 1) Implements local_flush_tlb_kernel_range() and 2) Performs VM mapping
> - * faster than copying, then it should be added here so that
> - * USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use page table
> - * mapping rather than copying
> - * for object mapping.
> + * performs VM mapping faster than copying, then it should be added here
> + * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
> + * page table mapping rather than copying for object mapping.
>  */
>  #if defined(CONFIG_ARM)
>  #define USE_PGTABLE_MAPPING
> @@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
>  
>  	flush_cache_vunmap(addr, end);
>  	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> -	local_flush_tlb_kernel_range(addr, end);
> +	flush_tlb_kernel_range(addr, end);
>  }
>  
>  #else /* USE_PGTABLE_MAPPING */
> -- 
> 1.7.9.5
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org
Konrad Rzeszutek Wilk Feb. 1, 2013, 1:02 p.m. UTC | #2
On Mon, Jan 28, 2013 at 10:00:08AM +0900, Minchan Kim wrote:
> Recently, Matt Sealey reported he fail to build zsmalloc caused by
> using of local_flush_tlb_kernel_range which are architecture dependent
> function so !CONFIG_SMP in ARM couldn't implement it so it ends up
> build error following as.
> 
>   MODPOST 216 modules
>   LZMA    arch/arm/boot/compressed/piggy.lzma
>   AS      arch/arm/boot/compressed/lib1funcs.o
> ERROR: "v7wbi_flush_kern_tlb_range"
> [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
> make[1]: *** [__modpost] Error 1
> make: *** [modules] Error 2
> make: *** Waiting for unfinished jobs....
> 
> The reason we used that function is copy method by [1]
> was really slow in ARM but at that time.
> 
> More severe problem is ARM can prefetch speculatively on other CPUs
> so under us, other TLBs can have an entry only if we do flush local
> CPU. Russell King pointed that. Thanks!
> We don't have many choices except using flush_tlb_kernel_range.
> 
> My experiment in ARMv7 processor 4 core didn't make any difference with
> zsmapbench[2] between local_flush_tlb_kernel_range and flush_tlb_kernel_range
> but still page-table based is much better than copy-based.
> 
> * bigger is better.
> 
> 1. local_flush_tlb_kernel_range: 3918795 mappings
> 2. flush_tlb_kernel_range : 3989538 mappings
> 3. copy-based: 635158 mappings
> 
> This patch replace local_flush_tlb_kernel_range with
> flush_tlb_kernel_range which are avaialbe in all architectures
> because we already have used it in vmalloc allocator which are
> generic one so build problem should go away and performane loss
> shoud be void.
> 
> [1] f553646, zsmalloc: add page table mapping method
> [2] https://github.com/spartacus06/zsmapbench
> 
> Cc: stable@vger.kernel.org
> Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> Cc: Nitin Gupta <ngupta@vflare.org>
> Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
> Reported-by: Matt Sealey <matt@genesi-usa.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> 
> Matt, Could you test this patch?
> 
>  drivers/staging/zsmalloc/zsmalloc-main.c |   10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> index eb00772..82e627c 100644
> --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> @@ -222,11 +222,9 @@ struct zs_pool {
>  /*
>   * By default, zsmalloc uses a copy-based object mapping method to access
>   * allocations that span two pages. However, if a particular architecture
> - * 1) Implements local_flush_tlb_kernel_range() and 2) Performs VM mapping
> - * faster than copying, then it should be added here so that
> - * USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use page table
> - * mapping rather than copying
> - * for object mapping.
> + * performs VM mapping faster than copying, then it should be added here
> + * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
> + * page table mapping rather than copying for object mapping.
>  */
>  #if defined(CONFIG_ARM)
>  #define USE_PGTABLE_MAPPING
> @@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
>  
>  	flush_cache_vunmap(addr, end);
>  	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> -	local_flush_tlb_kernel_range(addr, end);
> +	flush_tlb_kernel_range(addr, end);
>  }
>  
>  #else /* USE_PGTABLE_MAPPING */
> -- 
> 1.7.9.5
> 
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
Russell King - ARM Linux Feb. 1, 2013, 2:02 p.m. UTC | #3
On Mon, Jan 28, 2013 at 10:00:08AM +0900, Minchan Kim wrote:
> @@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
>  
>  	flush_cache_vunmap(addr, end);
>  	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> -	local_flush_tlb_kernel_range(addr, end);
> +	flush_tlb_kernel_range(addr, end);

void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
{
        vunmap_page_range(addr, addr + size);
}

void unmap_kernel_range(unsigned long addr, unsigned long size)
{
        unsigned long end = addr + size;

        flush_cache_vunmap(addr, end);
        vunmap_page_range(addr, end);
        flush_tlb_kernel_range(addr, end);
}

So, given the above, what would be different between:

	unsigned long end = addr + (PAGE_SIZE * 2);

	flush_cache_vunmap(addr, end);
	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
	flush_tlb_kernel_range(addr, end);

(which is what it becomes after your change) and

	unmap_kernel_range(addr, PAGE_SIZE * 2);

?
Minchan Kim Feb. 3, 2013, 11:50 p.m. UTC | #4
On Fri, Feb 01, 2013 at 02:02:18PM +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 28, 2013 at 10:00:08AM +0900, Minchan Kim wrote:
> > @@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
> >  
> >  	flush_cache_vunmap(addr, end);
> >  	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> > -	local_flush_tlb_kernel_range(addr, end);
> > +	flush_tlb_kernel_range(addr, end);
> 
> void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
> {
>         vunmap_page_range(addr, addr + size);
> }
> 
> void unmap_kernel_range(unsigned long addr, unsigned long size)
> {
>         unsigned long end = addr + size;
> 
>         flush_cache_vunmap(addr, end);
>         vunmap_page_range(addr, end);
>         flush_tlb_kernel_range(addr, end);
> }
> 
> So, given the above, what would be different between:
> 
> 	unsigned long end = addr + (PAGE_SIZE * 2);
> 
> 	flush_cache_vunmap(addr, end);
> 	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> 	flush_tlb_kernel_range(addr, end);
> 
> (which is what it becomes after your change) and
> 
> 	unmap_kernel_range(addr, PAGE_SIZE * 2);
> 
> ?

Good point. I will clean it up.
Thanks.
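The cleanup Russell suggests would presumably collapse the three calls into one (a sketch only, not compile-tested; the function signature is abbreviated from the diff hunk above):

```c
static inline void __zs_unmap_object(struct mapping_area *area,
				     struct page *pages[2], int off, int size)
{
	unsigned long addr = (unsigned long)area->vm_addr;

	/*
	 * unmap_kernel_range() performs flush_cache_vunmap(),
	 * vunmap_page_range() and flush_tlb_kernel_range() itself,
	 * so no explicit cache or TLB flushes are needed here.
	 */
	unmap_kernel_range(addr, PAGE_SIZE * 2);
}
```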

Simon Jeons Feb. 19, 2013, 10:07 a.m. UTC | #5
On 01/28/2013 09:00 AM, Minchan Kim wrote:
> Recently, Matt Sealey reported he fail to build zsmalloc caused by
> using of local_flush_tlb_kernel_range which are architecture dependent
> function so !CONFIG_SMP in ARM couldn't implement it so it ends up
> build error following as.

This confuses me!

1) Why is flush_tlb_kernel_range implemented differently on different
architectures?
2) Does 'local' here mean the local CPU? If so, why doesn't ARM support it?

>
>    MODPOST 216 modules
>    LZMA    arch/arm/boot/compressed/piggy.lzma
>    AS      arch/arm/boot/compressed/lib1funcs.o
> ERROR: "v7wbi_flush_kern_tlb_range"
> [drivers/staging/zsmalloc/zsmalloc.ko] undefined!
> make[1]: *** [__modpost] Error 1
> make: *** [modules] Error 2
> make: *** Waiting for unfinished jobs....
>
> The reason we used that function is copy method by [1]
> was really slow in ARM but at that time.
>
> More severe problem is ARM can prefetch speculatively on other CPUs
> so under us, other TLBs can have an entry only if we do flush local
> CPU. Russell King pointed that. Thanks!
> We don't have many choices except using flush_tlb_kernel_range.
>
> My experiment in ARMv7 processor 4 core didn't make any difference with
> zsmapbench[2] between local_flush_tlb_kernel_range and flush_tlb_kernel_range
> but still page-table based is much better than copy-based.
>
> * bigger is better.
>
> 1. local_flush_tlb_kernel_range: 3918795 mappings
> 2. flush_tlb_kernel_range : 3989538 mappings
> 3. copy-based: 635158 mappings
>
> This patch replace local_flush_tlb_kernel_range with
> flush_tlb_kernel_range which are avaialbe in all architectures
> because we already have used it in vmalloc allocator which are
> generic one so build problem should go away and performane loss
> shoud be void.
>
> [1] f553646, zsmalloc: add page table mapping method
> [2] https://github.com/spartacus06/zsmapbench
>
> Cc: stable@vger.kernel.org
> Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
> Cc: Nitin Gupta <ngupta@vflare.org>
> Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
> Reported-by: Matt Sealey <matt@genesi-usa.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>
> Matt, Could you test this patch?
>
>   drivers/staging/zsmalloc/zsmalloc-main.c |   10 ++++------
>   1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> index eb00772..82e627c 100644
> --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> @@ -222,11 +222,9 @@ struct zs_pool {
>   /*
>    * By default, zsmalloc uses a copy-based object mapping method to access
>    * allocations that span two pages. However, if a particular architecture
> - * 1) Implements local_flush_tlb_kernel_range() and 2) Performs VM mapping
> - * faster than copying, then it should be added here so that
> - * USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use page table
> - * mapping rather than copying
> - * for object mapping.
> + * performs VM mapping faster than copying, then it should be added here
> + * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
> + * page table mapping rather than copying for object mapping.
>   */
>   #if defined(CONFIG_ARM)
>   #define USE_PGTABLE_MAPPING
> @@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
>   
>   	flush_cache_vunmap(addr, end);
>   	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> -	local_flush_tlb_kernel_range(addr, end);
> +	flush_tlb_kernel_range(addr, end);
>   }
>   
>   #else /* USE_PGTABLE_MAPPING */

Minchan Kim Feb. 19, 2013, 11:45 p.m. UTC | #6
On Tue, Feb 19, 2013 at 06:07:40PM +0800, Simon Jeons wrote:
> On 01/28/2013 09:00 AM, Minchan Kim wrote:
> >Recently, Matt Sealey reported he fail to build zsmalloc caused by
> >using of local_flush_tlb_kernel_range which are architecture dependent
> >function so !CONFIG_SMP in ARM couldn't implement it so it ends up
> >build error following as.
> 
> Confuse me!
> 
> 1) Why I see flush_tlb_kernel_range is different in different architecture?

IMHO, each architecture implements it in whatever way performs best for that
architecture.

> 2) Does local here means local cpu? If the answer is yes, why ARM

Yes.

> doesn't support it?

ARM supports it for some configurations and CPUs.
The thing is that not every architecture supports it, so it's not a generic API.
That means we should avoid it in generic code.

> 
> >
> >   MODPOST 216 modules
> >   LZMA    arch/arm/boot/compressed/piggy.lzma
> >   AS      arch/arm/boot/compressed/lib1funcs.o
> >ERROR: "v7wbi_flush_kern_tlb_range"
> >[drivers/staging/zsmalloc/zsmalloc.ko] undefined!
> >make[1]: *** [__modpost] Error 1
> >make: *** [modules] Error 2
> >make: *** Waiting for unfinished jobs....
> >
> >The reason we used that function is copy method by [1]
> >was really slow in ARM but at that time.
> >
> >More severe problem is ARM can prefetch speculatively on other CPUs
> >so under us, other TLBs can have an entry only if we do flush local
> >CPU. Russell King pointed that. Thanks!
> >We don't have many choices except using flush_tlb_kernel_range.
> >
> >My experiment in ARMv7 processor 4 core didn't make any difference with
> >zsmapbench[2] between local_flush_tlb_kernel_range and flush_tlb_kernel_range
> >but still page-table based is much better than copy-based.
> >
> >* bigger is better.
> >
> >1. local_flush_tlb_kernel_range: 3918795 mappings
> >2. flush_tlb_kernel_range : 3989538 mappings
> >3. copy-based: 635158 mappings
> >
> >This patch replace local_flush_tlb_kernel_range with
> >flush_tlb_kernel_range which are avaialbe in all architectures
> >because we already have used it in vmalloc allocator which are
> >generic one so build problem should go away and performane loss
> >shoud be void.
> >
> >[1] f553646, zsmalloc: add page table mapping method
> >[2] https://github.com/spartacus06/zsmapbench
> >
> >Cc: stable@vger.kernel.org
> >Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
> >Cc: Russell King <linux@arm.linux.org.uk>
> >Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>
> >Cc: Nitin Gupta <ngupta@vflare.org>
> >Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
> >Reported-by: Matt Sealey <matt@genesi-usa.com>
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> >---
> >
> >Matt, Could you test this patch?
> >
> >  drivers/staging/zsmalloc/zsmalloc-main.c |   10 ++++------
> >  1 file changed, 4 insertions(+), 6 deletions(-)
> >
> >diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> >index eb00772..82e627c 100644
> >--- a/drivers/staging/zsmalloc/zsmalloc-main.c
> >+++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> >@@ -222,11 +222,9 @@ struct zs_pool {
> >  /*
> >   * By default, zsmalloc uses a copy-based object mapping method to access
> >   * allocations that span two pages. However, if a particular architecture
> >- * 1) Implements local_flush_tlb_kernel_range() and 2) Performs VM mapping
> >- * faster than copying, then it should be added here so that
> >- * USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use page table
> >- * mapping rather than copying
> >- * for object mapping.
> >+ * performs VM mapping faster than copying, then it should be added here
> >+ * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
> >+ * page table mapping rather than copying for object mapping.
> >  */
> >  #if defined(CONFIG_ARM)
> >  #define USE_PGTABLE_MAPPING
> >@@ -663,7 +661,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
> >  	flush_cache_vunmap(addr, end);
> >  	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
> >-	local_flush_tlb_kernel_range(addr, end);
> >+	flush_tlb_kernel_range(addr, end);
> >  }
> >  #else /* USE_PGTABLE_MAPPING */
> 

Patch

diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
index eb00772..82e627c 100644
--- a/drivers/staging/zsmalloc/zsmalloc-main.c
+++ b/drivers/staging/zsmalloc/zsmalloc-main.c
@@ -222,11 +222,9 @@  struct zs_pool {
 /*
  * By default, zsmalloc uses a copy-based object mapping method to access
  * allocations that span two pages. However, if a particular architecture
- * 1) Implements local_flush_tlb_kernel_range() and 2) Performs VM mapping
- * faster than copying, then it should be added here so that
- * USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use page table
- * mapping rather than copying
- * for object mapping.
+ * performs VM mapping faster than copying, then it should be added here
+ * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
+ * page table mapping rather than copying for object mapping.
 */
 #if defined(CONFIG_ARM)
 #define USE_PGTABLE_MAPPING
@@ -663,7 +661,7 @@  static inline void __zs_unmap_object(struct mapping_area *area,
 
 	flush_cache_vunmap(addr, end);
 	unmap_kernel_range_noflush(addr, PAGE_SIZE * 2);
-	local_flush_tlb_kernel_range(addr, end);
+	flush_tlb_kernel_range(addr, end);
 }
 
 #else /* USE_PGTABLE_MAPPING */