Linux SNPS ARC Archive on lore.kernel.org
From: rppt@linux.ibm.com (Mike Rapoport)
To: linux-snps-arc@lists.infradead.org
Subject: [PATCH v2 00/21] Refine memblock API
Date: Wed, 2 Oct 2019 10:36:06 +0300
Message-ID: <20191002073605.GA30433@linux.ibm.com> (raw)
In-Reply-To: <CAHCN7xKLhWw4P9-sZKXQcfSfh2r3J_+rLxuxACW0UVgimCzyVw@mail.gmail.com>

Hi Adam,

On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> On Sun, Sep 29, 2019 at 8:33 AM Adam Ford <aford173@gmail.com> wrote:
> >
> > I am attaching two logs.  I know the mailing lists will be unhappy, but
> > I don't want to spam a bunch of logs through the mailing list.
> > The two logs show the differences between the working and non-working
> > imx6q 3D accelerator when trying to run a simple glmark2-es2-drm demo.
> >
> > The only change between them is the 2 line code change you suggested.
> >
> > In both cases, I have cma=128M set in my bootargs.  Historically this
> > has been sufficient, but cma=256M has not made a difference.
> >
> 
> Mike, any suggestions on how to move forward?
> I was hoping to get the fixes tested and pushed before 5.4 is released,
> if at all possible.

I have a fix (below) that kinda restores the original behaviour, but I
still would like to double-check that it's not just a band-aid and that I
haven't missed the actual root cause.

Can you please send me your device tree definition and the output of 

cat /sys/kernel/debug/memblock/memory

and 

cat /sys/kernel/debug/memblock/reserved

Thanks!

From 06529f861772b7dea2912fc2245debe4690139b8 Mon Sep 17 00:00:00 2001
From: Mike Rapoport <rppt@linux.ibm.com>
Date: Wed, 2 Oct 2019 10:14:17 +0300
Subject: [PATCH] mm: memblock: do not enforce current limit for memblock_phys*
 family

Until commit 92d12f9544b7 ("memblock: refactor internal allocation
functions") the maximal address for memblock allocations was forced to
memblock.current_limit only for the allocation functions returning virtual
address. The changes introduced by that commit moved the limit enforcement
into the allocation core and as a result the allocation functions returning
physical address also started to limit allocations to
memblock.current_limit.

This caused breakage of etnaviv GPU driver:

[    3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[    3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[    3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[    3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[    3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
[    3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[    3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
[    3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[    3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0

Restore the behaviour of memblock_phys* family so that these functions will
not enforce memblock.current_limit.

Fixes: 92d12f9544b7 ("memblock: refactor internal allocation functions")
Reported-by: Adam Ford <aford173@gmail.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/memblock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 7d4f61a..c4b16ca 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 		align = SMP_CACHE_BYTES;
 	}
 
-	if (end > memblock.current_limit)
-		end = memblock.current_limit;
-
 again:
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
@@ -1469,6 +1466,9 @@ static void * __init memblock_alloc_internal(
 	if (WARN_ON_ONCE(slab_is_available()))
 		return kzalloc_node(size, GFP_NOWAIT, nid);
 
+	if (max_addr > memblock.current_limit)
+		max_addr = memblock.current_limit;
+
 	alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid);
 
 	/* retry allocation without lower limit */
-- 
2.7.4

 
> > adam
> >
> > On Sat, Sep 28, 2019 at 2:33 AM Mike Rapoport <rppt@linux.ibm.com> wrote:
> > >
> > > On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> > > > On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport <rppt@linux.ibm.com> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > > > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam <festevam@gmail.com> wrote:
> > > > > > >
> > > > > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford <aford173@gmail.com> wrote:
> > > > > > >
> > > > > > > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > > > > > > change.  Do we need to set up a reserved-memory node like
> > > > > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > > > > >
> > > > > > > I don't think so.
> > > > > > >
> > > > > > > Were you able to identify the exact commit that caused this regression?
> > > > > >
> > > > > > I was able to narrow it down to commit 92d12f9544b7 ("memblock:
> > > > > > refactor internal allocation functions"), which caused the
> > > > > > regression with Etnaviv.
> > > > >
> > > > >
> > > > > Can you please test with this change:
> > > > >
> > > >
> > > > That appears to have fixed my issue.  I am not sure what the impact
> > > > is, but is this a safe option?
> > >
> > > It's not really a fix, I just wanted to see how exactly 92d12f9544b7 ("memblock:
> > > refactor internal allocation functions") broke your setup.
> > >
> > > Can you share the dts you are using and the full kernel log?
> > >
> > > > adam
> > > >
> > > > > diff --git a/mm/memblock.c b/mm/memblock.c
> > > > > index 7d4f61a..1f5a0eb 100644
> > > > > --- a/mm/memblock.c
> > > > > +++ b/mm/memblock.c
> > > > > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > > > >                 align = SMP_CACHE_BYTES;
> > > > >         }
> > > > >
> > > > > -       if (end > memblock.current_limit)
> > > > > -               end = memblock.current_limit;
> > > > > -
> > > > >  again:
> > > > >         found = memblock_find_in_range_node(size, align, start, end, nid,
> > > > >                                             flags);
> > > > >
> > > > > > I also noticed that if I create a reserved-memory node as was done in
> > > > > > imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it, I
> > > > > > was getting errors regardless of 'cma=256M'.
> > > > > > I don't have a problem using the reserved memory, but I am not
> > > > > > sure what the amount should be.  I know that for 1080p video
> > > > > > decoding I have historically used cma=128M, but with the 3D also
> > > > > > needing some memory, is that enough, or should I use 256M?
> > > > > >
> > > > > > adam
> > > > >
> > > > > --
> > > > > Sincerely yours,
> > > > > Mike.
> > > > >
> > >
> > > --
> > > Sincerely yours,
> > > Mike.
> > >

-- 
Sincerely yours,
Mike.

Thread overview: 52+ messages
2019-01-21  8:03 rppt
2019-01-21  8:03 ` [PATCH v2 01/21] openrisc: prefer memblock APIs returning virtual address rppt
2019-01-27  3:07   ` shorne
2019-01-21  8:03 ` [PATCH v2 02/21] powerpc: use memblock functions " rppt
2019-01-29  9:52   ` mpe
2019-01-21  8:03 ` [PATCH v2 03/21] memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc rppt
2019-01-21  8:03 ` [PATCH v2 04/21] memblock: drop memblock_alloc_base_nid() rppt
2019-01-21  8:03 ` [PATCH v2 05/21] memblock: emphasize that memblock_alloc_range() returns a physical address rppt
2019-01-21  8:03 ` [PATCH v2 06/21] memblock: memblock_phys_alloc_try_nid(): don't panic rppt
2019-01-25 17:45   ` catalin.marinas
2019-01-25 19:32     ` rppt
2019-01-29  9:56   ` mpe
2019-01-29  9:58     ` mpe
2019-01-21  8:03 ` [PATCH v2 07/21] memblock: memblock_phys_alloc(): " rppt
2019-01-21  8:03 ` [PATCH v2 08/21] memblock: drop __memblock_alloc_base() rppt
2019-01-21  8:03 ` [PATCH v2 09/21] memblock: drop memblock_alloc_base() rppt
2019-01-29 10:29   ` mpe
2019-01-21  8:03 ` [PATCH v2 10/21] memblock: refactor internal allocation functions rppt
2019-02-03  9:39   ` mpe
2019-02-03 10:04     ` rppt
2019-01-21  8:03 ` [PATCH v2 11/21] memblock: make memblock_find_in_range_node() and choose_memblock_flags() static rppt
2019-01-21  8:03 ` [PATCH v2 12/21] arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0) rppt
2019-01-21  8:04 ` [PATCH v2 13/21] arch: don't memset(0) memory returned by memblock_alloc() rppt
2019-01-21  8:04 ` [PATCH v2 14/21] ia64: add checks for the return value of memblock_alloc*() rppt
2019-01-21  8:04 ` [PATCH v2 15/21] sparc: " rppt
2019-01-21  8:04 ` [PATCH v2 16/21] mm/percpu: " rppt
2019-01-21  8:04 ` [PATCH v2 17/21] init/main: " rppt
2019-01-21  8:04 ` [PATCH v2 18/21] swiotlb: " rppt
2019-01-21  8:04 ` [PATCH v2 19/21] treewide: " rppt
2019-01-21  8:39   ` geert
2019-01-21 17:18   ` robh
2019-01-31  6:07   ` christophe.leroy
2019-01-31  6:41     ` rppt
2019-01-31  6:44       ` christophe.leroy
2019-01-31  7:07         ` christophe.leroy
2019-01-31  7:14           ` rppt
2019-01-31 15:23   ` jcmvbkbc
2019-01-21  8:04 ` [PATCH v2 20/21] memblock: memblock_alloc_try_nid: don't panic rppt
2019-01-21  8:04 ` [PATCH v2 21/21] memblock: drop memblock_alloc_*_nopanic() variants rppt
2019-01-30 13:38   ` pmladek
2019-09-24 17:52 ` [PATCH v2 00/21] Refine memblock API aford173
2019-09-25 12:12   ` festevam
2019-09-25 12:17     ` aford173
2019-09-25 15:17       ` festevam
2019-09-26 13:09         ` aford173
2019-09-26 16:04           ` rppt
2019-09-26 19:35             ` aford173
2019-09-28  7:33               ` rppt
2019-09-29 13:33                 ` aford173
2019-10-02  0:14                   ` aford173
2019-10-02  7:36                     ` rppt [this message]
2019-10-02 11:14                       ` aford173
