* Patch "arm64: Fix swiotlb fallback allocation" has been added to the 4.9-stable tree
From: gregkh @ 2017-01-23 16:02 UTC
  To: agraf, catalin.marinas, geert+renesas, gregkh, jszhang, konrad.wilk
  Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: Fix swiotlb fallback allocation

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-fix-swiotlb-fallback-allocation.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 524dabe1c68e0bca25ce7b108099e5d89472a101 Mon Sep 17 00:00:00 2001
From: Alexander Graf <agraf@suse.de>
Date: Mon, 16 Jan 2017 12:46:33 +0100
Subject: arm64: Fix swiotlb fallback allocation

From: Alexander Graf <agraf@suse.de>

commit 524dabe1c68e0bca25ce7b108099e5d89472a101 upstream.

Commit b67a8b29df introduced logic to skip swiotlb allocation when all memory
is DMA accessible anyway.

While this is a great idea, __dma_alloc still calls into the swiotlb code
unconditionally to allocate memory when no CMA memory is available; the
swiotlb path is used there so that we at least try get_free_pages().

Without initialization, the swiotlb allocation code tries to access
io_tlb_list, which is NULL. That results in a stack trace like this:

  Unable to handle kernel NULL pointer dereference at virtual address 00000000
  [...]
  [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
  [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
  [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
  [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
  [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
  [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
  [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
  [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
  [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
  [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
  [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
  [...]
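
For illustration, the path that gets us there is roughly the following
(a simplified sketch, not the actual kernel code; the helper marked
"illustrative" is made up for brevity):

  /* arch/arm64/mm/dma-mapping.c, roughly */
  static void *__dma_alloc_coherent(struct device *dev, size_t size,
                                    dma_addr_t *dma_handle, gfp_t flags)
  {
          if (dev_get_cma_area(dev) && gfpflags_allow_blocking(flags))
                  return alloc_from_cma(dev, size, dma_handle); /* illustrative */

          /* No CMA area: the swiotlb allocator is called unconditionally. */
          return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
  }

  /* lib/swiotlb.c, roughly: swiotlb_alloc_coherent() first tries */
  ret = (void *)__get_free_pages(flags, get_order(size));
  /*
   * ... and if that fails, or the pages are not DMA-addressable by the
   * device, it falls back to swiotlb_tbl_map_single(), which walks
   * io_tlb_list -- NULL when swiotlb_init() was never called.
   */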

Thankfully the swiotlb code just learned how to skip allocations via the
SWIOTLB_NO_FORCE option. This patch configures the swiotlb code to use that
option whenever we decide not to initialize the swiotlb framework.
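
The SWIOTLB_NO_FORCE handling this relies on is roughly the following
early-out in lib/swiotlb.c (paraphrased from the swiotlb=noforce changes,
not an exact quote):

  phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, ...)
  {
          ...
          if (swiotlb_force == SWIOTLB_NO_FORCE) {
                  dev_warn_ratelimited(hwdev,
                                       "Cannot do DMA to address %pa\n",
                                       &orig_addr);
                  return SWIOTLB_MAP_ERROR;
          }
          ...
  }

So instead of dereferencing the NULL io_tlb_list, the mapping attempt fails
gracefully and the caller sees an allocation error.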

Fixes: b67a8b29df ("arm64: mm: only initialize swiotlb when necessary")
Signed-off-by: Alexander Graf <agraf@suse.de>
CC: Jisheng Zhang <jszhang@marvell.com>
CC: Geert Uytterhoeven <geert+renesas@glider.be>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/mm/init.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -403,6 +403,8 @@ void __init mem_init(void)
 {
 	if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
 		swiotlb_init(1);
+	else
+		swiotlb_force = SWIOTLB_NO_FORCE;
 
 	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
 


Patches currently in stable-queue which might be from agraf@suse.de are

queue-4.9/arm64-fix-swiotlb-fallback-allocation.patch


* Re: Patch "arm64: Fix swiotlb fallback allocation" has been added to the 4.9-stable tree
From: Geert Uytterhoeven @ 2017-01-23 16:15 UTC
  To: Greg KH
  Cc: Alexander Graf, Catalin Marinas, Geert Uytterhoeven, jszhang,
	Konrad Rzeszutek Wilk, stable, stable-commits

Hi Greg,

On Mon, Jan 23, 2017 at 5:02 PM,  <gregkh@linuxfoundation.org> wrote:
> [...]
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -403,6 +403,8 @@ void __init mem_init(void)
>  {
>         if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
>                 swiotlb_init(1);
> +       else
> +               swiotlb_force = SWIOTLB_NO_FORCE;

The above definition depends on:

commit ae7871be189cb411 ("swiotlb: Convert swiotlb_force from int to enum")
commit fff5d99225107f5f ("swiotlb: Add swiotlb=noforce debug option")
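
For reference, after those two commits swiotlb_force is roughly the
following enum (paraphrasing include/linux/swiotlb.h, not a verbatim copy):

  enum swiotlb_force {
          SWIOTLB_NORMAL,         /* Default - depending on HW DMA mask etc. */
          SWIOTLB_FORCE,          /* swiotlb=force */
          SWIOTLB_NO_FORCE,       /* swiotlb=noforce */
  };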

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: Patch "arm64: Fix swiotlb fallback allocation" has been added to the 4.9-stable tree
From: Catalin Marinas @ 2017-01-23 16:33 UTC
  To: Geert Uytterhoeven
  Cc: Greg KH, Alexander Graf, Geert Uytterhoeven, jszhang,
	Konrad Rzeszutek Wilk, stable, stable-commits

On Mon, Jan 23, 2017 at 05:15:39PM +0100, Geert Uytterhoeven wrote:
> On Mon, Jan 23, 2017 at 5:02 PM,  <gregkh@linuxfoundation.org> wrote:
> > [...]
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -403,6 +403,8 @@ void __init mem_init(void)
> >  {
> >         if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
> >                 swiotlb_init(1);
> > +       else
> > +               swiotlb_force = SWIOTLB_NO_FORCE;
> 
> The above definition depends on:
> 
> commit ae7871be189cb411 ("swiotlb: Convert swiotlb_force from int to enum")
> commit fff5d99225107f5f ("swiotlb: Add swiotlb=noforce debug option")

If these are not suitable for stable (too big, new feature), we can just
drop the arm64 commit from 4.9-stable (and we could fix it in a
different way).

-- 
Catalin


* Re: Patch "arm64: Fix swiotlb fallback allocation" has been added to the 4.9-stable tree
From: Greg KH @ 2017-01-24  7:30 UTC
  To: Catalin Marinas
  Cc: Geert Uytterhoeven, Alexander Graf, Geert Uytterhoeven, jszhang,
	Konrad Rzeszutek Wilk, stable, stable-commits

On Mon, Jan 23, 2017 at 04:33:08PM +0000, Catalin Marinas wrote:
> On Mon, Jan 23, 2017 at 05:15:39PM +0100, Geert Uytterhoeven wrote:
> > On Mon, Jan 23, 2017 at 5:02 PM,  <gregkh@linuxfoundation.org> wrote:
> > > [...]
> > > --- a/arch/arm64/mm/init.c
> > > +++ b/arch/arm64/mm/init.c
> > > @@ -403,6 +403,8 @@ void __init mem_init(void)
> > >  {
> > >         if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
> > >                 swiotlb_init(1);
> > > +       else
> > > +               swiotlb_force = SWIOTLB_NO_FORCE;
> > 
> > The above definition depends on:
> > 
> > commit ae7871be189cb411 ("swiotlb: Convert swiotlb_force from int to enum")
> > commit fff5d99225107f5f ("swiotlb: Add swiotlb=noforce debug option")
> 
> If these are not suitable for stable (too big, new feature), we can just
> drop the arm64 commit from 4.9-stable (and we could fix it in a
> different way).

Those look fine, I've queued them up now (took a bit of manual work),
and will let the autobuilder have fun with it...

thanks,

greg k-h

