* Wandboard Quad experience (good!)
@ 2013-07-09 21:38 Chris Tapp
  2013-07-09 21:51 ` Otavio Salvador
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-09 21:38 UTC (permalink / raw)
  To: meta-freescale

Thanks to help from Otavio and others, I now have an image building for the Wandboard Quad.

I've built a custom OpenGLES application running under EGL (which I normally run on a Cedar Trail platform) and it's mostly running as expected - just need to work out why my gstreamer pipeline isn't working.

The only bit that I had to work round was broadcom-nvram-config, which resulted in:

NOTE: Resolving any missing task queue dependencies
ERROR: Nothing RPROVIDES 'linux-firmware-INVALID' (but /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb RDEPENDS on or otherwise requires it)
NOTE: Runtime target 'linux-firmware-INVALID' is unbuildable, removing...

I got round this by commenting out:

MACHINE_EXTRA_RRECOMMENDS += " broadcom-nvram-config"

in the machine file.
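
For reference, a dependency on 'linux-firmware-INVALID' usually comes from a recipe that assembles its runtime dependency from a machine-specific variable with an INVALID fallback. A minimal sketch of that pattern follows; the variable name BCM_FIRMWARE and its default are illustrative assumptions, not the actual contents of broadcom-nvram-config.bb:

# Hypothetical recipe fragment showing how 'linux-firmware-INVALID' can arise:
# a machine-specific firmware selector with a placeholder default.
BCM_FIRMWARE ??= "INVALID"

# The runtime dependency is built from that variable, so a machine that never
# overrides BCM_FIRMWARE ends up requiring 'linux-firmware-INVALID', which
# nothing RPROVIDES.
RDEPENDS_${PN} += "linux-firmware-${BCM_FIRMWARE}"

In that pattern, a machine that actually fits the Broadcom module would be expected to override the variable in its machine .conf instead of pulling in the recommendation unconditionally.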

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-09 21:38 Wandboard Quad experience (good!) Chris Tapp
@ 2013-07-09 21:51 ` Otavio Salvador
  2013-07-09 22:03   ` Chris Tapp
  0 siblings, 1 reply; 16+ messages in thread
From: Otavio Salvador @ 2013-07-09 21:51 UTC (permalink / raw)
  To: Chris Tapp; +Cc: meta-freescale

On Tue, Jul 9, 2013 at 6:38 PM, Chris Tapp <opensource@keylevel.com> wrote:
> Thanks to help from Otavio and others, I now have an image building for the Wandboard Quad.
>
> I've built a custom OpenGLES application running under EGL (which I normally run on a Cedar Trail platform) and it's mostly running as expected - just need to work out why my gstreamer pipeline isn't working.
>
> The only bit that I had to work round was broadcom-nvram-config, which resulted in:
>
> NOTE: Resolving any missing task queue dependencies
> ERROR: Nothing RPROVIDES 'linux-firmware-INVALID' (but /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb RDEPENDS on or otherwise requires it)
> NOTE: Runtime target 'linux-firmware-INVALID' is unbuildable, removing...
>
> I got round this by commenting out:
>
> MACHINE_EXTRA_RRECOMMENDS += " broadcom-nvram-config"

I fixed this and pushed it to master-next; please confirm it works :-)

--
Otavio Salvador                             O.S. Systems
http://www.ossystems.com.br        http://projetos.ossystems.com.br
Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750



* Re: Wandboard Quad experience (good!)
  2013-07-09 21:51 ` Otavio Salvador
@ 2013-07-09 22:03   ` Chris Tapp
  2013-07-09 23:05     ` Chris Tapp
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-09 22:03 UTC (permalink / raw)
  To: Otavio Salvador; +Cc: meta-freescale


On 9 Jul 2013, at 22:51, Otavio Salvador wrote:

> On Tue, Jul 9, 2013 at 6:38 PM, Chris Tapp <opensource@keylevel.com> wrote:
>> Thanks to help from Otavio and others, I now have an image building for the Wandboard Quad.
>> 
>> I've built a custom OpenGLES application running under EGL (which I normally run on a Cedar Trail platform) and it's mostly running as expected - just need to work out why my gstreamer pipeline isn't working.
>> 
>> The only bit that I had to work round was broadcom-nvram-config, which resulted in:
>> 
>> NOTE: Resolving any missing task queue dependencies
>> ERROR: Nothing RPROVIDES 'linux-firmware-INVALID' (but /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb RDEPENDS on or otherwise requires it)
>> NOTE: Runtime target 'linux-firmware-INVALID' is unbuildable, removing...
>> 
>> I got round this by commenting out:
>> 
>> MACHINE_EXTRA_RRECOMMENDS += " broadcom-nvram-config"
> 
> I fixed this and pushed it to master-next; please confirm it works :-)

Yes it does. Thank you :-)

> --
> Otavio Salvador                             O.S. Systems
> http://www.ossystems.com.br        http://projetos.ossystems.com.br
> Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-09 22:03   ` Chris Tapp
@ 2013-07-09 23:05     ` Chris Tapp
  2013-07-10  2:57       ` John Weber
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-09 23:05 UTC (permalink / raw)
  To: Otavio Salvador; +Cc: meta-freescale


On 9 Jul 2013, at 23:03, Chris Tapp wrote:

> 
> On 9 Jul 2013, at 22:51, Otavio Salvador wrote:
> 
>> On Tue, Jul 9, 2013 at 6:38 PM, Chris Tapp <opensource@keylevel.com> wrote:
>>> Thanks to help from Otavio and others, I now have an image building for the Wandboard Quad.
>>> 
>>> I've built a custom OpenGLES application running under EGL (which I normally run on a Cedar Trail platform) and it's mostly running as expected - just need to work out why my gstreamer pipeline isn't working.
>>> 
>>> The only bit that I had to work round was broadcom-nvram-config, which resulted in:
>>> 
>>> NOTE: Resolving any missing task queue dependencies
>>> ERROR: Nothing RPROVIDES 'linux-firmware-INVALID' (but /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb RDEPENDS on or otherwise requires it)
>>> NOTE: Runtime target 'linux-firmware-INVALID' is unbuildable, removing...
>>> 
>>> I got round this by commenting out:
>>> 
>>> MACHINE_EXTRA_RRECOMMENDS += " broadcom-nvram-config"
>> 
>> I fixed this and pushed it to master-next; please confirm it works :-)
> 
> Yes it does. Thank you :-)

Or not... I'm getting a fetcher failure:

WARNING: Failed to fetch URL file://nvram.txt, attempting MIRRORS if available
ERROR: Fetcher failure: Unable to find file file://nvram.txt anywhere. The paths that were searched were:
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/arm
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/armv7a
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/mx6
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/mx6q
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/wandboard
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/wandboard-quad
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/poky
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/arm
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/armv7a
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/mx6
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/mx6q
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/wandboard
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/wandboard-quad
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/poky
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/arm
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/armv7a
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/mx6
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/mx6q
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/wandboard
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/wandboard-quad
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/poky
    /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/
    /media/SSD-RAID/build-danny-wandboard/../yocto-downloads
ERROR: Function failed: Fetcher failure for URL: 'file://nvram.txt'. Unable to fetch URL from any source.
ERROR: Task 4 (/media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb, do_fetch) failed with exit code '1'

> 
>> --
>> Otavio Salvador                             O.S. Systems
>> http://www.ossystems.com.br        http://projetos.ossystems.com.br
>> Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750
> 
> Chris Tapp
> 
> opensource@keylevel.com
> www.keylevel.com
> 
> 
> 
> _______________________________________________
> meta-freescale mailing list
> meta-freescale@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/meta-freescale

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-09 23:05     ` Chris Tapp
@ 2013-07-10  2:57       ` John Weber
  2013-07-10  7:50         ` Chris Tapp
  2013-07-12 22:50         ` Chris Tapp
  0 siblings, 2 replies; 16+ messages in thread
From: John Weber @ 2013-07-10  2:57 UTC (permalink / raw)
  To: Chris Tapp; +Cc: meta-freescale, Otavio Salvador


Hi Chris,

First off - thanks for trying out the kernel.  Let me know if you see
anything that needs to be fixed.  Patches are welcome, of course.

I chatted with Otavio about this issue because I'm working on getting Quad
up and running tonight.  He already has a patch to submit.  If you're
interested in trying this tonight, all you need to do is rename:

meta-fsl-arm-extra/recipes-bsp/broadcom-nvram-config/files/wandboard-dual

to

meta-fsl-arm-extra/recipes-bsp/broadcom-nvram-config/files/wandboard
(without the '-dual')

When we did this in the beginning, all we had with Wifi on it was the
Dual.  Now we have Quad, so we need to use a common directory.

John



On Tue, Jul 9, 2013 at 6:05 PM, Chris Tapp <opensource@keylevel.com> wrote:

>
> On 9 Jul 2013, at 23:03, Chris Tapp wrote:
>
> >
> > On 9 Jul 2013, at 22:51, Otavio Salvador wrote:
> >
> >> On Tue, Jul 9, 2013 at 6:38 PM, Chris Tapp <opensource@keylevel.com>
> wrote:
> >>> Thanks to help from Otavio and others, I now have an image building
> for the Wandboard Quad.
> >>>
> >>> I've built a custom OpenGLES application running under EGL (which I
> normally run on a Cedar Trail platform) and it's mostly running as expected
> - just need to work out why my gstreamer pipeline isn't working.
> >>>
> >>> The only bit that I had to work round was broadcom-nvram-config, which
> resulted in:
> >>>
> >>> NOTE: Resolving any missing task queue dependencies
> >>> ERROR: Nothing RPROVIDES 'linux-firmware-INVALID' (but
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/
> broadcom-nvram-config.bb RDEPENDS on or otherwise requires it)
> >>> NOTE: Runtime target 'linux-firmware-INVALID' is unbuildable,
> removing...
> >>>
> >>> I got round this by commenting out:
> >>>
> >>> MACHINE_EXTRA_RRECOMMENDS += " broadcom-nvram-config"
> >>
> >> I fixed this and pushed it to master-next; please confirm it works :-)
> >
> > Yes it does. Thank you :-)
>
> Or not... I'm getting a fetcher failure:
>
> WARNING: Failed to fetch URL file://nvram.txt, attempting MIRRORS if
> available
> ERROR: Fetcher failure: Unable to find file file://nvram.txt anywhere. The
> paths that were searched were:
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/arm
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/armv7a
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/mx6
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/mx6q
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/wandboard
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/wandboard-quad
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/poky
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/arm
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/armv7a
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/mx6
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/mx6q
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/wandboard
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/wandboard-quad
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/poky
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/arm
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/armv7a
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/mx6
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/mx6q
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/wandboard
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/wandboard-quad
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/poky
>
> /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/
>     /media/SSD-RAID/build-danny-wandboard/../yocto-downloads
> ERROR: Function failed: Fetcher failure for URL: 'file://nvram.txt'.
> Unable to fetch URL from any source.
> ERROR: Task 4
> (/media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/
> broadcom-nvram-config.bb, do_fetch) failed with exit code '1'
>
> >
> >> --
> >> Otavio Salvador                             O.S. Systems
> >> http://www.ossystems.com.br        http://projetos.ossystems.com.br
> >> Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750
> >
> > Chris Tapp
> >
> > opensource@keylevel.com
> > www.keylevel.com
> >
> >
> >
> > _______________________________________________
> > meta-freescale mailing list
> > meta-freescale@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/meta-freescale
>
> Chris Tapp
>
> opensource@keylevel.com
> www.keylevel.com
>
>
>
> _______________________________________________
> meta-freescale mailing list
> meta-freescale@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/meta-freescale
>



* Re: Wandboard Quad experience (good!)
  2013-07-10  2:57       ` John Weber
@ 2013-07-10  7:50         ` Chris Tapp
  2013-07-10 12:44           ` Otavio Salvador
  2013-07-12 22:50         ` Chris Tapp
  1 sibling, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-10  7:50 UTC (permalink / raw)
  To: John Weber; +Cc: meta-freescale, Otavio Salvador


Hi John,

On 10 Jul 2013, at 03:57, John Weber wrote:

> Hi Chris,
> 
> First off - thanks for trying out the kernel.  Let me know if you see anything that needs to be fixed.  Patches are welcome, of course.
> 
> I chatted with Otavio about this issue because I'm working on getting Quad up and running tonight.  He already has a patch to submit.  If you're interested in trying this tonight, all you need to do is rename:
> 
> meta-fsl-arm-extra/recipes-bsp/broadcom-nvram-config/files/wandboard-dual
> 
> to
> 
> meta-fsl-arm-extra/recipes-bsp/broadcom-nvram-config/files/wandboard  (without the '-dual')
> 
> When we did this in the beginning, all we had with Wifi on it was the Dual.  Now we have Quad, so we need to use a common directory.

Thanks, that's done the trick.

Kernel looks good so far - I just need to enable CONFIG_HID_APPLE so my keyboard works ;-)
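
If the change needs to be carried in a local layer rather than by editing the BSP, one common way is a kernel bbappend that adds a configuration fragment. The sketch below is only an illustration: the recipe name linux-wandboard and .cfg-fragment support are assumptions; if this kernel builds purely from a shipped defconfig, the same CONFIG_HID_APPLE=y line belongs in that defconfig instead.

# linux-wandboard_%.bbappend (hypothetical name) in a custom layer
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# hid-apple.cfg contains a single line:  CONFIG_HID_APPLE=y
SRC_URI += "file://hid-apple.cfg"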

> 
> John
> 
> 
> 
> On Tue, Jul 9, 2013 at 6:05 PM, Chris Tapp <opensource@keylevel.com> wrote:
> 
> On 9 Jul 2013, at 23:03, Chris Tapp wrote:
> 
> >
> > On 9 Jul 2013, at 22:51, Otavio Salvador wrote:
> >
> >> On Tue, Jul 9, 2013 at 6:38 PM, Chris Tapp <opensource@keylevel.com> wrote:
> >>> Thanks to help from Otavio and others, I now have an image building for the Wandboard Quad.
> >>>
> >>> I've built a custom OpenGLES application running under EGL (which I normally run on a Cedar Trail platform) and it's mostly running as expected - just need to work out why my gstreamer pipeline isn't working.
> >>>
> >>> The only bit that I had to work round was broadcom-nvram-config, which resulted in:
> >>>
> >>> NOTE: Resolving any missing task queue dependencies
> >>> ERROR: Nothing RPROVIDES 'linux-firmware-INVALID' (but /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb RDEPENDS on or otherwise requires it)
> >>> NOTE: Runtime target 'linux-firmware-INVALID' is unbuildable, removing...
> >>>
> >>> I got round this by commenting out:
> >>>
> >>> MACHINE_EXTRA_RRECOMMENDS += " broadcom-nvram-config"
> >>
> >> I fixed this and pushed it to master-next; please confirm it works :-)
> >
> > Yes it does. Thank you :-)
> 
> Or not... I'm getting a fetcher failure:
> 
> WARNING: Failed to fetch URL file://nvram.txt, attempting MIRRORS if available
> ERROR: Fetcher failure: Unable to find file file://nvram.txt anywhere. The paths that were searched were:
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/arm
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/armv7a
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/mx6
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/mx6q
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/wandboard
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/wandboard-quad
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/poky
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config-1.0/
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/arm
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/armv7a
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/mx6
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/mx6q
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/wandboard
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/wandboard-quad
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/poky
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config/
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/arm
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/armv7a
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/mx6
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/mx6q
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/wandboard
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/wandboard-quad
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/poky
>     /media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/files/
>     /media/SSD-RAID/build-danny-wandboard/../yocto-downloads
> ERROR: Function failed: Fetcher failure for URL: 'file://nvram.txt'. Unable to fetch URL from any source.
> ERROR: Task 4 (/media/SSD-RAID/meta-fsl-arm-extra-git/recipes-bsp/broadcom-nvram-config/broadcom-nvram-config.bb, do_fetch) failed with exit code '1'
> 
> >
> >> --
> >> Otavio Salvador                             O.S. Systems
> >> http://www.ossystems.com.br        http://projetos.ossystems.com.br
> >> Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750
> >
> > Chris Tapp
> >
> > opensource@keylevel.com
> > www.keylevel.com
> >
> >
> >
> > _______________________________________________
> > meta-freescale mailing list
> > meta-freescale@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/meta-freescale
> 
> Chris Tapp
> 
> opensource@keylevel.com
> www.keylevel.com
> 
> 
> 
> _______________________________________________
> meta-freescale mailing list
> meta-freescale@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/meta-freescale
> 

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-10  7:50         ` Chris Tapp
@ 2013-07-10 12:44           ` Otavio Salvador
  0 siblings, 0 replies; 16+ messages in thread
From: Otavio Salvador @ 2013-07-10 12:44 UTC (permalink / raw)
  To: Chris Tapp; +Cc: meta-freescale

On Wed, Jul 10, 2013 at 4:50 AM, Chris Tapp <opensource@keylevel.com> wrote:
> Hi John,
>
> On 10 Jul 2013, at 03:57, John Weber wrote:
>
> Hi Chris,
>
> First off - thanks for trying out the kernel.  Let me know if you see
> anything that needs to be fixed.  Patches are welcome, of course.
>
> I chatted with Otavio about this issue because I'm working on getting Quad
> up and running tonight.  He already has a patch to submit.  If you're
> interested in trying this tonight, all you need to do is rename:
>
> meta-fsl-arm-extra/recipes-bsp/broadcom-nvram-config/files/wandboard-dual
>
> to
>
> meta-fsl-arm-extra/recipes-bsp/broadcom-nvram-config/files/wandboard
> (without the '-dual')
>
> When we did this in the beginning, all we had with Wifi on it was the Dual.
> Now we have Quad, so we need to use a common directory.
>
>
> Thanks, that's done the trick.

All previous patches merged to master.

> Kernel looks good so far - I just need to enable CONFIG_HID_APPLE so my
> keyboard works ;-)

Please make a patch and send it ;)

--
Otavio Salvador                             O.S. Systems
http://www.ossystems.com.br        http://projetos.ossystems.com.br
Mobile: +55 (53) 9981-7854            Mobile: +1 (347) 903-9750



* Re: Wandboard Quad experience (good!)
  2013-07-10  2:57       ` John Weber
  2013-07-10  7:50         ` Chris Tapp
@ 2013-07-12 22:50         ` Chris Tapp
  2013-07-13  0:07           ` John Weber
  1 sibling, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-12 22:50 UTC (permalink / raw)
  To: John Weber; +Cc: meta-freescale Mailing List

Hi John,


On 10 Jul 2013, at 03:57, John Weber wrote:

> Hi Chris,
> 
> First off - thanks for trying out the kernel.  Let me know if you see anything that needs to be fixed.  Patches are welcome, of course.

May have one for you:

[  997.270164] Unable to handle kernel NULL pointer dereference at virtual address 0000003b
[  997.278271] pgd = e44f8000
[  997.281635] [0000003b] *pgd=34476831, *pte=00000000, *ppte=00000000
[  997.288109] Internal error: Oops: 817 [#1] PREEMPT SMP
[  997.293254] Modules linked in: hid_apple brcmfmac brcmutil ov5640_camera_mipi camera_sensor_clock
[  997.302227] CPU: 1    Not tainted  (3.0.35-wandboard+yocto+g0a103c1 #1)
[  997.308863] PC is at gckCOMMAND_Commit+0x280/0x924
[  997.313664] LR is at gckCOMMAND_Commit+0x1dc/0x924
[  997.318465] pc : [<c0302e68>]    lr : [<c0302dc4>]    psr: 800f0013
[  997.318472] sp : e461fbd8  ip : ffdf6000  fp : e461fd5c
[  997.329967] r10: 00000008  r9 : 00001a68  r8 : ffffffff
[  997.335198] r7 : 00000001  r6 : e461fc38  r5 : 00000000  r4 : e9ea0f00
[  997.341733] r3 : e9bd10c0  r2 : 00000038  r1 : 00000000  r0 : ffffffff
[  997.348270] Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
[  997.355414] Control: 10c53c7d  Table: 344f804a  DAC: 00000015
[  997.361168] Process wallboard (pid: 1457, stack limit = 0xe461e2f8)
[  997.367442] Stack: (0xe461fbd8 to 0xe4620000)
[  997.371807] fbc0:                                                       00000000 00000000
[  997.379998] fbe0: 000000e8 00001000 ffdf6000 147ee000 000837b0 40ef53b0 00000038 00001a68
[  997.388191] fc00: 00000000 e9dd8e20 e9f9a000 4254c1e8 e461fc2c 00000001 11c9e1e8 00000008
[  997.396381] fc20: 00000020 00000008 00000010 c00638f4 e461fc7c 00000000 42444d43 00000000
[  997.404571] fc40: 00000000 00000000 00000001 00000000 00000000 0000004b 4252e000 00000000
[  997.412759] fc60: 00020000 0001e1e8 0001fc48 000003a8 4254dc38 00000000 0001fc38 00000000
[  997.420945] fc80: 00000000 c01dd274 00000102 e461e000 c0585ca0 c0033b58 00000000 00000001
[  997.429131] fca0: 00000102 e461e000 e461e000 c0585ca0 e461fcd4 e461fcc0 c0070730 c00638f4
[  997.437317] fcc0: 0001248d e461e018 e461fcfc e461fcd8 c00351a4 c00706a4 a00f0013 ffffffff
[  997.445503] fce0: f2a00100 0000001d 00000001 00000000 e461fd5c e461fd00 c003a64c c0035148
[  997.453689] fd00: 00000000 00000000 00000000 e9e76000 00000000 e9f18de0 e9ea0f00 00000000
[  997.461877] fd20: 00000001 e461e000 be8231a0 e461fd5c 00000001 e461fe20 e9f18de0 e9ea0f00
[  997.470063] fd40: 00000000 00000001 e461e000 be8231a0 e461fdfc e461fd60 c03017d8 c0302bf4
[  997.478248] fd60: 410e4f60 000005b1 c0427580 fffffff5 00000040 00000003 00000200 00000000
[  997.486434] fd80: 00000000 00000000 00000000 000005b1 e461fddc e461fda0 e9ea2400 c03a0710
[  997.494620] fda0: 00000040 00000000 e461fdb4 00000000 e9bab160 00000010 c03536cc 00000000
[  997.502806] fdc0: 00000000 00000000 e60397a0 00000406 e461fecc e461e000 e9cb9a00 e42f3d20
[  997.510992] fde0: 00007530 e461e000 e461e000 be8231a0 e461feec e461fe00 c02fb908 c030100c
[  997.519177] fe00: be8231f0 00000000 000000a8 00000000 be8231f0 00000000 000000a8 00000000
[  997.527363] fe20: 00000013 00000001 be82321c be823220 be823224 4401e7d0 4400a9e0 4400da04
[  997.535548] fe40: 00000041 00000000 40ef53b0 00000000 000837b0 00000000 410e4f60 00000000
[  997.543734] fe60: 00000000 00085088 41102158 00000000 42bedcdc 41102158 41102158 41102158
[  997.551920] fe80: 00000002 00000000 00000000 42b40014 42bedcdc 42bedce8 41102158 42b40e10
[  997.560106] fea0: 42bedce8 00000001 41102158 42bedce8 00000000 42b449d4 00000000 4480b100
[  997.568292] fec0: 00000000 00000000 e98f3120 e98f3120 00007530 00000003 e9c84328 e461e000
[  997.576478] fee0: e461fefc e461fef0 c00f144c c02fb75c e461ff7c e461ff00 c00f1e78 c00f1430
[  997.584663] ff00: e4391460 00000000 00000000 00000000 c00a87e0 e98f3808 00000008 00000001
[  997.592849] ff20: e60008b8 00000000 e461e000 00000000 e461ff6c e461ff40 c00e2fb4 c0111ba0
[  997.601035] ff40: 00000000 00000008 00000001 e98f3120 e461ff7c be8231a0 e98f3120 00007530
[  997.609221] ff60: 00000003 c003ad84 e461e000 00000000 e461ffa4 e461ff80 c00f1fa8 c00f1a70
[  997.617407] ff80: e461ffa4 00000001 42bedc38 00007530 00002710 00000036 00000000 e461ffa8
[  997.625593] ffa0: c003ac00 c00f1f78 42bedc38 00007530 00000003 00007530 be8231a0 00075028
[  997.633779] ffc0: 42bedc38 00007530 00002710 00000036 000000a8 00000000 be8231f0 410e4f60
[  997.641965] ffe0: 42bea100 be823194 42bd2684 43ba962c 200f0010 00000003 00000000 00000000
[  997.650145] Backtrace: 
[  997.652619] [<c0302be8>] (gckCOMMAND_Commit+0x0/0x924) from [<c03017d8>] (gckKERNEL_Dispatch+0x7d8/0x14c0)
[  997.662292] [<c0301000>] (gckKERNEL_Dispatch+0x0/0x14c0) from [<c02fb908>] (drv_ioctl+0x1b8/0x264)
[  997.671269] [<c02fb750>] (drv_ioctl+0x0/0x264) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
[  997.679189]  r9:e461e000 r8:e9c84328 r7:00000003 r6:00007530 r5:e98f3120
[  997.685776] r4:e98f3120
[  997.688431] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
[  997.696708] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
[  997.704900] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
[  997.713254]  r7:00000036 r6:00002710 r5:00007530 r4:42bedc38
[  997.718977] Code: e1530008 0a00005e e3a07001 e1a00008 (e588703c) 
[  997.726672] ---[ end trace c96b5c701dda9d58 ]---

Does that mean anything? I think this may be related to the gstreamer problems I've been having, but...

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-12 22:50         ` Chris Tapp
@ 2013-07-13  0:07           ` John Weber
  2013-07-13 21:26             ` Chris Tapp
  0 siblings, 1 reply; 16+ messages in thread
From: John Weber @ 2013-07-13  0:07 UTC (permalink / raw)
  To: Chris Tapp; +Cc: meta-freescale Mailing List

Chris,

This looks like it is coming from the Vivante GPU driver in:
drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.

How would I replicate this problem?

I did make one commit to this driver to init some uninitialized variables. 
These were throwing some errors during kernel compile.  Other than that, this 
driver is untouched by me.

John

On 7/12/13 5:50 PM, Chris Tapp wrote:
> Hi John,
>
>
> On 10 Jul 2013, at 03:57, John Weber wrote:
>
>> Hi Chris,
>>
>> First off - thanks for trying out the kernel.  Let me know if you see anything that needs to be fixed.  Patches are welcome, of course.
>
> May have one for you:
>
> [  997.270164] Unable to handle kernel NULL pointer dereference at virtual address 0000003b
> [  997.278271] pgd = e44f8000
> [  997.281635] [0000003b] *pgd=34476831, *pte=00000000, *ppte=00000000
> [  997.288109] Internal error: Oops: 817 [#1] PREEMPT SMP
> [  997.293254] Modules linked in: hid_apple brcmfmac brcmutil ov5640_camera_mipi camera_sensor_clock
> [  997.302227] CPU: 1    Not tainted  (3.0.35-wandboard+yocto+g0a103c1 #1)
> [  997.308863] PC is at gckCOMMAND_Commit+0x280/0x924
> [  997.313664] LR is at gckCOMMAND_Commit+0x1dc/0x924
> [  997.318465] pc : [<c0302e68>]    lr : [<c0302dc4>]    psr: 800f0013
> [  997.318472] sp : e461fbd8  ip : ffdf6000  fp : e461fd5c
> [  997.329967] r10: 00000008  r9 : 00001a68  r8 : ffffffff
> [  997.335198] r7 : 00000001  r6 : e461fc38  r5 : 00000000  r4 : e9ea0f00
> [  997.341733] r3 : e9bd10c0  r2 : 00000038  r1 : 00000000  r0 : ffffffff
> [  997.348270] Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
> [  997.355414] Control: 10c53c7d  Table: 344f804a  DAC: 00000015
> [  997.361168] Process wallboard (pid: 1457, stack limit = 0xe461e2f8)
> [  997.367442] Stack: (0xe461fbd8 to 0xe4620000)
> [  997.371807] fbc0:                                                       00000000 00000000
> [  997.379998] fbe0: 000000e8 00001000 ffdf6000 147ee000 000837b0 40ef53b0 00000038 00001a68
> [  997.388191] fc00: 00000000 e9dd8e20 e9f9a000 4254c1e8 e461fc2c 00000001 11c9e1e8 00000008
> [  997.396381] fc20: 00000020 00000008 00000010 c00638f4 e461fc7c 00000000 42444d43 00000000
> [  997.404571] fc40: 00000000 00000000 00000001 00000000 00000000 0000004b 4252e000 00000000
> [  997.412759] fc60: 00020000 0001e1e8 0001fc48 000003a8 4254dc38 00000000 0001fc38 00000000
> [  997.420945] fc80: 00000000 c01dd274 00000102 e461e000 c0585ca0 c0033b58 00000000 00000001
> [  997.429131] fca0: 00000102 e461e000 e461e000 c0585ca0 e461fcd4 e461fcc0 c0070730 c00638f4
> [  997.437317] fcc0: 0001248d e461e018 e461fcfc e461fcd8 c00351a4 c00706a4 a00f0013 ffffffff
> [  997.445503] fce0: f2a00100 0000001d 00000001 00000000 e461fd5c e461fd00 c003a64c c0035148
> [  997.453689] fd00: 00000000 00000000 00000000 e9e76000 00000000 e9f18de0 e9ea0f00 00000000
> [  997.461877] fd20: 00000001 e461e000 be8231a0 e461fd5c 00000001 e461fe20 e9f18de0 e9ea0f00
> [  997.470063] fd40: 00000000 00000001 e461e000 be8231a0 e461fdfc e461fd60 c03017d8 c0302bf4
> [  997.478248] fd60: 410e4f60 000005b1 c0427580 fffffff5 00000040 00000003 00000200 00000000
> [  997.486434] fd80: 00000000 00000000 00000000 000005b1 e461fddc e461fda0 e9ea2400 c03a0710
> [  997.494620] fda0: 00000040 00000000 e461fdb4 00000000 e9bab160 00000010 c03536cc 00000000
> [  997.502806] fdc0: 00000000 00000000 e60397a0 00000406 e461fecc e461e000 e9cb9a00 e42f3d20
> [  997.510992] fde0: 00007530 e461e000 e461e000 be8231a0 e461feec e461fe00 c02fb908 c030100c
> [  997.519177] fe00: be8231f0 00000000 000000a8 00000000 be8231f0 00000000 000000a8 00000000
> [  997.527363] fe20: 00000013 00000001 be82321c be823220 be823224 4401e7d0 4400a9e0 4400da04
> [  997.535548] fe40: 00000041 00000000 40ef53b0 00000000 000837b0 00000000 410e4f60 00000000
> [  997.543734] fe60: 00000000 00085088 41102158 00000000 42bedcdc 41102158 41102158 41102158
> [  997.551920] fe80: 00000002 00000000 00000000 42b40014 42bedcdc 42bedce8 41102158 42b40e10
> [  997.560106] fea0: 42bedce8 00000001 41102158 42bedce8 00000000 42b449d4 00000000 4480b100
> [  997.568292] fec0: 00000000 00000000 e98f3120 e98f3120 00007530 00000003 e9c84328 e461e000
> [  997.576478] fee0: e461fefc e461fef0 c00f144c c02fb75c e461ff7c e461ff00 c00f1e78 c00f1430
> [  997.584663] ff00: e4391460 00000000 00000000 00000000 c00a87e0 e98f3808 00000008 00000001
> [  997.592849] ff20: e60008b8 00000000 e461e000 00000000 e461ff6c e461ff40 c00e2fb4 c0111ba0
> [  997.601035] ff40: 00000000 00000008 00000001 e98f3120 e461ff7c be8231a0 e98f3120 00007530
> [  997.609221] ff60: 00000003 c003ad84 e461e000 00000000 e461ffa4 e461ff80 c00f1fa8 c00f1a70
> [  997.617407] ff80: e461ffa4 00000001 42bedc38 00007530 00002710 00000036 00000000 e461ffa8
> [  997.625593] ffa0: c003ac00 c00f1f78 42bedc38 00007530 00000003 00007530 be8231a0 00075028
> [  997.633779] ffc0: 42bedc38 00007530 00002710 00000036 000000a8 00000000 be8231f0 410e4f60
> [  997.641965] ffe0: 42bea100 be823194 42bd2684 43ba962c 200f0010 00000003 00000000 00000000
> [  997.650145] Backtrace:
> [  997.652619] [<c0302be8>] (gckCOMMAND_Commit+0x0/0x924) from [<c03017d8>] (gckKERNEL_Dispatch+0x7d8/0x14c0)
> [  997.662292] [<c0301000>] (gckKERNEL_Dispatch+0x0/0x14c0) from [<c02fb908>] (drv_ioctl+0x1b8/0x264)
> [  997.671269] [<c02fb750>] (drv_ioctl+0x0/0x264) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
> [  997.679189]  r9:e461e000 r8:e9c84328 r7:00000003 r6:00007530 r5:e98f3120
> [  997.685776] r4:e98f3120
> [  997.688431] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
> [  997.696708] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
> [  997.704900] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
> [  997.713254]  r7:00000036 r6:00002710 r5:00007530 r4:42bedc38
> [  997.718977] Code: e1530008 0a00005e e3a07001 e1a00008 (e588703c)
> [  997.726672] ---[ end trace c96b5c701dda9d58 ]---
>
> Does that mean anything? I think this may be related to the gstreamer problems I've been having, but...
>
> Chris Tapp
>
> opensource@keylevel.com
> www.keylevel.com
>
>
>



* Re: Wandboard Quad experience (good!)
  2013-07-13  0:07           ` John Weber
@ 2013-07-13 21:26             ` Chris Tapp
  2013-07-14 19:38               ` John Weber
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-13 21:26 UTC (permalink / raw)
  To: John Weber; +Cc: meta-freescale Mailing List

Hi John,

On 13 Jul 2013, at 01:07, John Weber wrote:

> Chris,
> 
> This looks like it is coming from the Vivante GPU driver in:
> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
> 
> How would I replicate this problem?

Good question! Simply running (with GST_DEBUG="*:2")

gst-launch playbin2 uri=http://media.w3.org/2010/05/sintel/trailer.webm video-sink="queue2 ! mfw_v4lsink"

sometimes gives this:

[ 2445.396718] source:src: page allocation failure: order:11, mode:0xd1
[ 2445.403170] Backtrace: 
[ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from [<c0421760>] (dump_stack+0x18/0x1c)
[ 2445.414222]  r6:e9b80000 r5:000000d1 r4:00000001 r3:00000000
[ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>] (warn_alloc_failed+0xe4/0x104)
[ 2445.428825] [<c00b7b04>] (warn_alloc_failed+0x0/0x104) from [<c00ba12c>] (__alloc_pages_nodemask+0x5b8/0x634)
[ 2445.438806]  r3:00000000 r2:00000000
[ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b r4:000000d1
[ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from [<c004400c>] (__dma_alloc+0xc4/0x2b0)
[ 2445.458728] [<c0043f48>] (__dma_alloc+0x0/0x2b0) from [<c0044550>] (dma_alloc_coherent+0x5c/0x68)
[ 2445.467695] [<c00444f4>] (dma_alloc_coherent+0x0/0x68) from [<c02f693c>] (vpu_alloc_dma_buffer+0x34/0x5c)
[ 2445.477323]  r7:e9b80000 r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
[ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac)
[ 2445.492366]  r4:41efe940 r3:00000000
[ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
[ 2445.504025]  r8:e9b468d0 r7:00000007 r6:00005600 r5:e9b667a0 r4:e9b667a0
[ 2445.510888] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
[ 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
[ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
[ 2445.535753]  r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
[ 2445.541491] Mem-info:
[ 2445.543770] DMA per-cpu:
[ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
[ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
[ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
[ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
[ 2445.565529] Normal per-cpu:
[ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
[ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
[ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
[ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
[ 2445.587553] HighMem per-cpu:
[ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
[ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
[ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
[ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
[ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
[ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
[ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
[ 2445.609690]  free:456985 slab_reclaimable:305 slab_unreclaimable:1705
[ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
[ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[ 2445.675225] lowmem_reserve[]: 0 308 1673 1673
[ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB active_anon:0kB inactive_anon:0kB active_file:2036kB inactive_file:1676kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[ 2445.717837] lowmem_reserve[]: 0 0 10922 10922
[ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB active_anon:5284kB inactive_anon:76kB active_file:7208kB inactive_file:7428kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[ 2445.760550] lowmem_reserve[]: 0 0 0 0
[ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB 0*32768kB = 48392kB
[ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB 9*32768kB = 392244kB
[ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB 1*8192kB 2*16384kB 40*32768kB = 1387304kB
[ 2445.805269] 4614 total pagecache pages
[ 2445.809023] 0 pages in swap cache
[ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
[ 2445.817586] Free swap  = 0kB
[ 2445.820483] Total swap = 0kB
[ 2445.870417] 524288 pages of RAM
[ 2445.873566] 458040 free pages
[ 2445.876537] 50744 reserved pages
[ 2445.879767] 2012 slab pages
[ 2445.882584] 5157 pages shared
[ 2445.885557] 0 pages swap cached
[ 2445.888705] Physical memory allocation error!
[ 2445.893083] Physical memory allocation error!

I'll have a go at getting the other one to show...

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-13 21:26             ` Chris Tapp
@ 2013-07-14 19:38               ` John Weber
  2013-07-15  7:46                 ` Chris Tapp
  2013-07-15 20:27                 ` Chris Tapp
  0 siblings, 2 replies; 16+ messages in thread
From: John Weber @ 2013-07-14 19:38 UTC (permalink / raw)
  To: Chris Tapp; +Cc: meta-freescale Mailing List

Hi Chris,

Thanks.  You've probably noticed, but this is a different error from the first 
one you sent.  The first one was in the Vivante driver.

This one seems related to a memory limitation and perhaps related to the 
CONFIG_SWAP being on by default in the kernel build.  Since there is no swap 
partition, having CONFIG_SWAP seems a little useless.  Oddly enough, all of the 
default i.MX6 defconfigs set CONFIG_SWAP.  I'm seeing if we can remove that option.

John

On 7/13/13 4:26 PM, Chris Tapp wrote:
> Hi John,
>
> On 13 Jul 2013, at 01:07, John Weber wrote:
>
>> Chris,
>>
>> This looks like it is coming from the Vivante GPU driver in:
>> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
>>
>> How would I replicate this problem?
>
> Good question! Simply running (with GST_DEBUG="*:2")
>
> gst-launch playbin2 uri=http://media.w3.org/2010/05/sintel/trailer.webm video-sink="queue2 ! mfw_v4lsink"
>
> sometimes gives this:
>
> [ 2445.396718] source:src: page allocation failure: order:11, mode:0xd1
> [ 2445.403170] Backtrace:
> [ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from [<c0421760>] (dump_stack+0x18/0x1c)
> [ 2445.414222]  r6:e9b80000 r5:000000d1 r4:00000001 r3:00000000
> [ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>] (warn_alloc_failed+0xe4/0x104)
> [ 2445.428825] [<c00b7b04>] (warn_alloc_failed+0x0/0x104) from [<c00ba12c>] (__alloc_pages_nodemask+0x5b8/0x634)
> [ 2445.438806]  r3:00000000 r2:00000000
> [ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b r4:000000d1
> [ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from [<c004400c>] (__dma_alloc+0xc4/0x2b0)
> [ 2445.458728] [<c0043f48>] (__dma_alloc+0x0/0x2b0) from [<c0044550>] (dma_alloc_coherent+0x5c/0x68)
> [ 2445.467695] [<c00444f4>] (dma_alloc_coherent+0x0/0x68) from [<c02f693c>] (vpu_alloc_dma_buffer+0x34/0x5c)
> [ 2445.477323]  r7:e9b80000 r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
> [ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac)
> [ 2445.492366]  r4:41efe940 r3:00000000
> [ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
> [ 2445.504025]  r8:e9b468d0 r7:00000007 r6:00005600 r5:e9b667a0 r4:e9b667a0
> [ 2445.510888] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
> [ 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
> [ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
> [ 2445.535753]  r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
> [ 2445.541491] Mem-info:
> [ 2445.543770] DMA per-cpu:
> [ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
> [ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
> [ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
> [ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
> [ 2445.565529] Normal per-cpu:
> [ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
> [ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
> [ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
> [ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
> [ 2445.587553] HighMem per-cpu:
> [ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
> [ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
> [ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
> [ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
> [ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
> [ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
> [ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
> [ 2445.609690]  free:456985 slab_reclaimable:305 slab_unreclaimable:1705
> [ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
> [ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
> [ 2445.675225] lowmem_reserve[]: 0 308 1673 1673
> [ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB active_anon:0kB inactive_anon:0kB active_file:2036kB inactive_file:1676kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [ 2445.717837] lowmem_reserve[]: 0 0 10922 10922
> [ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB active_anon:5284kB inactive_anon:76kB active_file:7208kB inactive_file:7428kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [ 2445.760550] lowmem_reserve[]: 0 0 0 0
> [ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB 0*32768kB = 48392kB
> [ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB 9*32768kB = 392244kB
> [ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB 1*8192kB 2*16384kB 40*32768kB = 1387304kB
> [ 2445.805269] 4614 total pagecache pages
> [ 2445.809023] 0 pages in swap cache
> [ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
> [ 2445.817586] Free swap  = 0kB
> [ 2445.820483] Total swap = 0kB
> [ 2445.870417] 524288 pages of RAM
> [ 2445.873566] 458040 free pages
> [ 2445.876537] 50744 reserved pages
> [ 2445.879767] 2012 slab pages
> [ 2445.882584] 5157 pages shared
> [ 2445.885557] 0 pages swap cached
> [ 2445.888705] Physical memory allocation error!
> [ 2445.893083] Physical memory allocation error!
>
> I'll have a go at getting the other one to show...
>
> Chris Tapp
>
> opensource@keylevel.com
> www.keylevel.com
>
>
>



* Re: Wandboard Quad experience (good!)
  2013-07-14 19:38               ` John Weber
@ 2013-07-15  7:46                 ` Chris Tapp
  2013-07-15 20:27                 ` Chris Tapp
  1 sibling, 0 replies; 16+ messages in thread
From: Chris Tapp @ 2013-07-15  7:46 UTC (permalink / raw)
  To: John Weber; +Cc: meta-freescale Mailing List

Hi John,

On 14 Jul 2013, at 20:38, John Weber wrote:

> Hi Chris,
> 
> Thanks.  You've probably noticed, but this is a different error from the first one you sent.  The first one was in the Vivante driver.

Yes, I had spotted it was different...

> 
> This one seems related to a memory limitation and perhaps related to the CONFIG_SWAP being on by default in the kernel build.  Since there is no swap partition, having CONFIG_SWAP seems a little useless.  Oddly enough, all of the default i.MX6 defconfigs set CONFIG_SWAP.  I'm seeing if we can remove that option.

but I had no idea it was related to swap ;-) I'll disable it locally and see if that makes a difference. I can't see why swap should be getting used, as the app never uses anywhere near the 2GB the board has, but I've seen Linux hit swap when there still seems to be plenty of physical memory about, so this may be expected.

It is just possible that this may be related to the other issue, as I can imagine a gstreamer-heavy app does a lot of memory allocation/deallocation...

> John
> 
> On 7/13/13 4:26 PM, Chris Tapp wrote:
>> Hi John,
>> 
>> On 13 Jul 2013, at 01:07, John Weber wrote:
>> 
>>> Chris,
>>> 
>>> This looks like it is coming from the Vivante GPU driver in:
>>> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
>>> 
>>> How would I replicate this problem?
>> 
>> Good question! Simply running (with GST_DEBUG="*:2")
>> 
>> gst-launch playbin2 uri=http://media.w3.org/2010/05/sintel/trailer.webm video-sink="queue2 ! mfw_v4lsink"
>> 
>> sometimes gives this:
>> 
>> [ 2445.396718] source:src: page allocation failure: order:11, mode:0xd1
>> [ 2445.403170] Backtrace:
>> [ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from [<c0421760>] (dump_stack+0x18/0x1c)
>> [ 2445.414222]  r6:e9b80000 r5:000000d1 r4:00000001 r3:00000000
>> [ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>] (warn_alloc_failed+0xe4/0x104)
>> [ 2445.428825] [<c00b7b04>] (warn_alloc_failed+0x0/0x104) from [<c00ba12c>] (__alloc_pages_nodemask+0x5b8/0x634)
>> [ 2445.438806]  r3:00000000 r2:00000000
>> [ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b r4:000000d1
>> [ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from [<c004400c>] (__dma_alloc+0xc4/0x2b0)
>> [ 2445.458728] [<c0043f48>] (__dma_alloc+0x0/0x2b0) from [<c0044550>] (dma_alloc_coherent+0x5c/0x68)
>> [ 2445.467695] [<c00444f4>] (dma_alloc_coherent+0x0/0x68) from [<c02f693c>] (vpu_alloc_dma_buffer+0x34/0x5c)
>> [ 2445.477323]  r7:e9b80000 r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
>> [ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac)
>> [ 2445.492366]  r4:41efe940 r3:00000000
>> [ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
>> [ 2445.504025]  r8:e9b468d0 r7:00000007 r6:00005600 r5:e9b667a0 r4:e9b667a0
>> [ 2445.510888] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
>> [ 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
>> [ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
>> [ 2445.535753]  r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
>> [ 2445.541491] Mem-info:
>> [ 2445.543770] DMA per-cpu:
>> [ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
>> [ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
>> [ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
>> [ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
>> [ 2445.565529] Normal per-cpu:
>> [ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
>> [ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
>> [ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
>> [ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
>> [ 2445.587553] HighMem per-cpu:
>> [ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
>> [ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
>> [ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
>> [ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
>> [ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
>> [ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
>> [ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
>> [ 2445.609690]  free:456985 slab_reclaimable:305 slab_unreclaimable:1705
>> [ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
>> [ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
>> [ 2445.675225] lowmem_reserve[]: 0 308 1673 1673
>> [ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB active_anon:0kB inactive_anon:0kB active_file:2036kB inactive_file:1676kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [ 2445.717837] lowmem_reserve[]: 0 0 10922 10922
>> [ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB active_anon:5284kB inactive_anon:76kB active_file:7208kB inactive_file:7428kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [ 2445.760550] lowmem_reserve[]: 0 0 0 0
>> [ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB 0*32768kB = 48392kB
>> [ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB 9*32768kB = 392244kB
>> [ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB 1*8192kB 2*16384kB 40*32768kB = 1387304kB
>> [ 2445.805269] 4614 total pagecache pages
>> [ 2445.809023] 0 pages in swap cache
>> [ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
>> [ 2445.817586] Free swap  = 0kB
>> [ 2445.820483] Total swap = 0kB
>> [ 2445.870417] 524288 pages of RAM
>> [ 2445.873566] 458040 free pages
>> [ 2445.876537] 50744 reserved pages
>> [ 2445.879767] 2012 slab pages
>> [ 2445.882584] 5157 pages shared
>> [ 2445.885557] 0 pages swap cached
>> [ 2445.888705] Physical memory allocation error!
>> [ 2445.893083] Physical memory allocation error!
>> 
>> I'll have a go at getting the other one to show...
>> 
>> Chris Tapp
>> 
>> opensource@keylevel.com
>> www.keylevel.com
>> 
>> 
>> 

Chris Tapp

opensource@keylevel.com
www.keylevel.com






* Re: Wandboard Quad experience (good!)
  2013-07-14 19:38               ` John Weber
  2013-07-15  7:46                 ` Chris Tapp
@ 2013-07-15 20:27                 ` Chris Tapp
  2013-07-16  2:53                   ` John Weber
  1 sibling, 1 reply; 16+ messages in thread
From: Chris Tapp @ 2013-07-15 20:27 UTC (permalink / raw)
  To: John Weber; +Cc: meta-freescale Mailing List

Hi John,

This (2nd crash) still happens when CONFIG_SWAP is not set, but it's just possible that the Vivante one has gone! I've had a board running my app for over 6 hours and the problem hasn't shown yet. I'll keep it running...

However, I now see lots of:

[ 1481.696860] Not power off before vpu open!

Any idea what these are about?

I've also noticed that the Wandboard is very sensitive to power quality. The specs say a 5 V, 2 A PSU is suitable - but it's not if it has a very fast current limit :-) I get reboots (U-Boot reports 'POR' as the reset cause) if I use a lab PSU with its current limit set to 2 A.

On 14 Jul 2013, at 20:38, John Weber wrote:

> Hi Chris,
> 
> Thanks.  You've probably noticed, but this is a different error from the first one you sent.  The first one was in the Vivante driver.
> 
> This one seems related to a memory limitation and perhaps related to the CONFIG_SWAP being on by default in the kernel build.  Since there is no swap partition, having CONFIG_SWAP seems a little useless.  Oddly enough, all of the default i.MX6 defconfigs set CONFIG_SWAP.  I'm seeing if we can remove that option.
> 
> John
> 
> On 7/13/13 4:26 PM, Chris Tapp wrote:
>> Hi John,
>> 
>> On 13 Jul 2013, at 01:07, John Weber wrote:
>> 
>>> Chris,
>>> 
>>> This looks like it is coming from the Vivante GPU driver in:
>>> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
>>> 
>>> How would I replicate this problem?
>> 
>> Good question! Simply running (with GST_DEBUG="*:2")
>> 
>> gst-launch playbin2 uri=http://media.w3.org/2010/05/sintel/trailer.webm video-sink="queue2 ! mfw_v4lsink"
>> 
>> sometimes gives this:
>> 
>> [ 2445.396718] source:src: page allocation failure: order:11, mode:0xd1
>> [ 2445.403170] Backtrace:
>> [ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from [<c0421760>] (dump_stack+0x18/0x1c)
>> [ 2445.414222]  r6:e9b80000 r5:000000d1 r4:00000001 r3:00000000
>> [ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>] (warn_alloc_failed+0xe4/0x104)
>> [ 2445.428825] [<c00b7b04>] (warn_alloc_failed+0x0/0x104) from [<c00ba12c>] (__alloc_pages_nodemask+0x5b8/0x634)
>> [ 2445.438806]  r3:00000000 r2:00000000
>> [ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b r4:000000d1
>> [ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from [<c004400c>] (__dma_alloc+0xc4/0x2b0)
>> [ 2445.458728] [<c0043f48>] (__dma_alloc+0x0/0x2b0) from [<c0044550>] (dma_alloc_coherent+0x5c/0x68)
>> [ 2445.467695] [<c00444f4>] (dma_alloc_coherent+0x0/0x68) from [<c02f693c>] (vpu_alloc_dma_buffer+0x34/0x5c)
>> [ 2445.477323]  r7:e9b80000 r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
>> [ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac)
>> [ 2445.492366]  r4:41efe940 r3:00000000
>> [ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
>> [ 2445.504025]  r8:e9b468d0 r7:00000007 r6:00005600 r5:e9b667a0 r4:e9b667a0
>> [ 2445.510888] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
>> [ 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
>> [ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
>> [ 2445.535753]  r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
>> [ 2445.541491] Mem-info:
>> [ 2445.543770] DMA per-cpu:
>> [ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
>> [ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
>> [ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
>> [ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
>> [ 2445.565529] Normal per-cpu:
>> [ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
>> [ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
>> [ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
>> [ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
>> [ 2445.587553] HighMem per-cpu:
>> [ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
>> [ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
>> [ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
>> [ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
>> [ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
>> [ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
>> [ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
>> [ 2445.609690]  free:456985 slab_reclaimable:305 slab_unreclaimable:1705
>> [ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
>> [ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
>> [ 2445.675225] lowmem_reserve[]: 0 308 1673 1673
>> [ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB active_anon:0kB inactive_anon:0kB active_file:2036kB inactive_file:1676kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [ 2445.717837] lowmem_reserve[]: 0 0 10922 10922
>> [ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB active_anon:5284kB inactive_anon:76kB active_file:7208kB inactive_file:7428kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [ 2445.760550] lowmem_reserve[]: 0 0 0 0
>> [ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB 0*32768kB = 48392kB
>> [ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB 9*32768kB = 392244kB
>> [ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB 1*8192kB 2*16384kB 40*32768kB = 1387304kB
>> [ 2445.805269] 4614 total pagecache pages
>> [ 2445.809023] 0 pages in swap cache
>> [ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
>> [ 2445.817586] Free swap  = 0kB
>> [ 2445.820483] Total swap = 0kB
>> [ 2445.870417] 524288 pages of RAM
>> [ 2445.873566] 458040 free pages
>> [ 2445.876537] 50744 reserved pages
>> [ 2445.879767] 2012 slab pages
>> [ 2445.882584] 5157 pages shared
>> [ 2445.885557] 0 pages swap cached
>> [ 2445.888705] Physical memory allocation error!
>> [ 2445.893083] Physical memory allocation error!
>> 
>> I'll have a go at getting the other one to show...
>> 
>> Chris Tapp
>> 
>> opensource@keylevel.com
>> www.keylevel.com
>> 
>> 
>> 

Chris Tapp

opensource@keylevel.com
www.keylevel.com





^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Wandboard Quad experience (good!)
  2013-07-15 20:27                 ` Chris Tapp
@ 2013-07-16  2:53                   ` John Weber
  2013-07-16 15:58                     ` Thomas Senyk
  0 siblings, 1 reply; 16+ messages in thread
From: John Weber @ 2013-07-16  2:53 UTC (permalink / raw)
  To: Chris Tapp; +Cc: meta-freescale Mailing List

Hi Chris,

You might want to consider emailing the wandboard-dev mailing list for Wandboard 
kernel questions.

wandboard-dev@lists.wandboard.org

You'll need to sign up of course:
http://wandboard.org/cgi-bin/mailman/listinfo/wandboard-dev

Or the main user group at
wandboard@googlegroups.com

I haven't seen that message when using the video encoder functionality of the VPU 
from GStreamer.  Just grepping the source, that message comes from the VPU driver 
(as expected).  During vpu_open(), it seems to enable the clock to the VPU, 
then check whether the clock is enabled by reading the program counter of 
what I'm guessing is the integrated bitstream processor in the VPU.  If that is 
greater than 0x0, it prints that debug message, disables the VPU clock, 
and moves on.
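
For illustration, a minimal, self-contained userspace mock of the vpu_open() 
sequence described above. Every name in it (mock_vpu_open_check, 
read_bit_cur_pc, the clock stubs) is an invented stand-in; this is not the 
real drivers/mxc/vpu code, just a sketch of the check as described:

/* Userspace mock of the described vpu_open() check: enable the VPU clock,
 * read the bitstream processor's program counter, warn if it is non-zero,
 * then disable the clock and carry on. All names are invented. */
#include <stdio.h>

static void vpu_clk_enable(void)  { /* stand-in: ungate the VPU clock */ }
static void vpu_clk_disable(void) { /* stand-in: gate the VPU clock   */ }

/* Stand-in for reading the bitstream processor's program counter. */
static unsigned int read_bit_cur_pc(void) { return 0x120; }

static void mock_vpu_open_check(void)
{
    vpu_clk_enable();
    if (read_bit_cur_pc() > 0x0)
        printf("Not power off before vpu open!\n");
    vpu_clk_disable();
}

int main(void)
{
    mock_vpu_open_check();  /* prints the warning because the mock PC is non-zero */
    return 0;
}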

John


On 7/15/13 3:27 PM, Chris Tapp wrote:
> Hi John,
>
> This (2nd crash) still happens when CONFIG_SWAP is not set, but it's just possible that the Vivante one has gone! I've had a board running my app. for over 6 hours hours and the problem hasn't shown yet. I'll keep it running...
>
> However, I now see lots of:
>
> [ 1481.696860] Not power off before vpu open!
>
> Any idea what these are about?
>
> I've also noticed that the Wandboard is very sensitive to power quality. The specs say a 5v, 2A PSU is suitable - but it's not if it's got a very fast current limit :-) I get reboots (uboot reports 'POR' reset cause) if I use a lab PSU set to 2A.
>
> On 14 Jul 2013, at 20:38, John Weber wrote:
>
>> Hi Chris,
>>
>> Thanks.  You've probably noticed, but this is a different error from the first one you sent.  The first one was in the Vivante driver.
>>
>> This one seems related to a memory limitation and perhaps related to the CONFIG_SWAP being on by default in the kernel build.  Since there is no swap partition, having CONFIG_SWAP seems a little useless.  Oddly enough, all of the default i.MX6 defconfigs set CONFIG_SWAP.  I'm seeing if we can remove that option.
>>
>> John
>>
>> On 7/13/13 4:26 PM, Chris Tapp wrote:
>>> Hi John,
>>>
>>> On 13 Jul 2013, at 01:07, John Weber wrote:
>>>
>>>> Chris,
>>>>
>>>> This looks like it is coming from the Vivante GPU driver in:
>>>> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
>>>>
>>>> How would I replicate this problem?
>>>
>>> Good question! Simply running (with GST_DEBUG="*:2")
>>>
>>> gst-launch playbin2 uri=http://media.w3.org/2010/05/sintel/trailer.webm video-sink="queue2 ! mfw_v4lsink"
>>>
>>> sometimes gives this:
>>>
>>> [ 2445.396718] source:src: page allocation failure: order:11, mode:0xd1
>>> [ 2445.403170] Backtrace:
>>> [ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from [<c0421760>] (dump_stack+0x18/0x1c)
>>> [ 2445.414222]  r6:e9b80000 r5:000000d1 r4:00000001 r3:00000000
>>> [ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>] (warn_alloc_failed+0xe4/0x104)
>>> [ 2445.428825] [<c00b7b04>] (warn_alloc_failed+0x0/0x104) from [<c00ba12c>] (__alloc_pages_nodemask+0x5b8/0x634)
>>> [ 2445.438806]  r3:00000000 r2:00000000
>>> [ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b r4:000000d1
>>> [ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from [<c004400c>] (__dma_alloc+0xc4/0x2b0)
>>> [ 2445.458728] [<c0043f48>] (__dma_alloc+0x0/0x2b0) from [<c0044550>] (dma_alloc_coherent+0x5c/0x68)
>>> [ 2445.467695] [<c00444f4>] (dma_alloc_coherent+0x0/0x68) from [<c02f693c>] (vpu_alloc_dma_buffer+0x34/0x5c)
>>> [ 2445.477323]  r7:e9b80000 r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
>>> [ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac)
>>> [ 2445.492366]  r4:41efe940 r3:00000000
>>> [ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>] (vfs_ioctl+0x28/0x44)
>>> [ 2445.504025]  r8:e9b468d0 r7:00000007 r6:00005600 r5:e9b667a0 r4:e9b667a0
>>> [ 2445.510888] [<c00f1424>] (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508)
>>> [ 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>] (sys_ioctl+0x3c/0x68)
>>> [ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68) from [<c003ac00>] (ret_fast_syscall+0x0/0x30)
>>> [ 2445.535753]  r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
>>> [ 2445.541491] Mem-info:
>>> [ 2445.543770] DMA per-cpu:
>>> [ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
>>> [ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
>>> [ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
>>> [ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
>>> [ 2445.565529] Normal per-cpu:
>>> [ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
>>> [ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
>>> [ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
>>> [ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
>>> [ 2445.587553] HighMem per-cpu:
>>> [ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
>>> [ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
>>> [ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
>>> [ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
>>> [ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
>>> [ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
>>> [ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
>>> [ 2445.609690]  free:456985 slab_reclaimable:305 slab_unreclaimable:1705
>>> [ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
>>> [ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
>>> [ 2445.675225] lowmem_reserve[]: 0 308 1673 1673
>>> [ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB active_anon:0kB inactive_anon:0kB active_file:2036kB inactive_file:1676kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>>> [ 2445.717837] lowmem_reserve[]: 0 0 10922 10922
>>> [ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB active_anon:5284kB inactive_anon:76kB active_file:7208kB inactive_file:7428kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>>> [ 2445.760550] lowmem_reserve[]: 0 0 0 0
>>> [ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB 0*32768kB = 48392kB
>>> [ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB 9*32768kB = 392244kB
>>> [ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB 1*8192kB 2*16384kB 40*32768kB = 1387304kB
>>> [ 2445.805269] 4614 total pagecache pages
>>> [ 2445.809023] 0 pages in swap cache
>>> [ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
>>> [ 2445.817586] Free swap  = 0kB
>>> [ 2445.820483] Total swap = 0kB
>>> [ 2445.870417] 524288 pages of RAM
>>> [ 2445.873566] 458040 free pages
>>> [ 2445.876537] 50744 reserved pages
>>> [ 2445.879767] 2012 slab pages
>>> [ 2445.882584] 5157 pages shared
>>> [ 2445.885557] 0 pages swap cached
>>> [ 2445.888705] Physical memory allocation error!
>>> [ 2445.893083] Physical memory allocation error!
>>>
>>> I'll have a go at getting the other one to show...
>>>
>>> Chris Tapp
>>>
>>> opensource@keylevel.com
>>> www.keylevel.com
>>>
>>>
>>>
>
> Chris Tapp
>
> opensource@keylevel.com
> www.keylevel.com
>
>
>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Wandboard Quad experience (good!)
  2013-07-16  2:53                   ` John Weber
@ 2013-07-16 15:58                     ` Thomas Senyk
  2013-07-16 16:45                       ` Thomas Senyk
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Senyk @ 2013-07-16 15:58 UTC (permalink / raw)
  To: meta-freescale

I do get the 
[ 1685.143891] Physical memory allocation error!

errors as well! (same back-trace, so I will not repeat it)

I'm using a nitrogen6x right now.
I've already set CONFIG_SWAP=n, which didn't help for me.

This morning I managed to work around this issue (read a bit below for why it's 
not a solution for me) with this patch:
http://pastebin.com/hRgpitvP

inspired by a patch from:
https://community.freescale.com/message/316472#316472


However, this morning I switched to master for all my layers (meta-fsl-arm-
extra is on master-next); now, if I apply this patch, I get a kernel panic at 
start-up:
< I don't have the log/back-trace right now, if requested I can provide it >



Anyway, I'm not sure it's the right solution, because although I got rid 
of the kernel errors I then had a different problem:

Somehow the kernel and/or the application started to misbehave at some point.
The framebuffer went black (and even /dev/urandom couldn't do anything about 
it), and if I tried to kill the application (SIGTERM or SIGKILL) it became a 
zombie and consumed ~25-30% CPU continuously. Only a reboot helped.

Sounds like it ended up in an endless loop in the kernel, doesn't it?

Greets
Thomas


On Monday, 15 July, 2013 21:53:03 John Weber wrote:
> Hi Chris,
> 
> You might want to consider emailing the wandboard-dev mailing list for
> Wandboard kernel questions.
> 
> wandboard-dev@lists.wandboard.org
> 
> You'll need to sign up of course:
> http://wandboard.org/cgi-bin/mailman/listinfo/wandboard-dev
> 
> Or the main user group at
> wandboard@googlegroups.com
> 
> I haven't seen that message in using the video encoder functionality of the
> VPU from Gstreamer.  Just greping the source, that message comes from the
> vpu driver (as expected).  During vpu_open(), it seems to be enabling the
> clock to the VPU, then checking to see if the clock is enabled by checking
> a program counter of what is I'm guessing the integrated bitstream
> processor in the VPU.  If that is greater than 0x0, then it pops that debug
> message.  Then it disables the vpu clock, then moves on.
> 
> John
> 
> On 7/15/13 3:27 PM, Chris Tapp wrote:
> > Hi John,
> > 
> > This (2nd crash) still happens when CONFIG_SWAP is not set, but it's just
> > possible that the Vivante one has gone! I've had a board running my app.
> > for over 6 hours hours and the problem hasn't shown yet. I'll keep it
> > running...
> > 
> > However, I now see lots of:
> > 
> > [ 1481.696860] Not power off before vpu open!
> > 
> > Any idea what these are about?
> > 
> > I've also noticed that the Wandboard is very sensitive to power quality.
> > The specs say a 5v, 2A PSU is suitable - but it's not if it's got a very
> > fast current limit :-) I get reboots (uboot reports 'POR' reset cause) if
> > I use a lab PSU set to 2A.> 
> > On 14 Jul 2013, at 20:38, John Weber wrote:
> >> Hi Chris,
> >> 
> >> Thanks.  You've probably noticed, but this is a different error from the
> >> first one you sent.  The first one was in the Vivante driver.
> >> 
> >> This one seems related to a memory limitation and perhaps related to the
> >> CONFIG_SWAP being on by default in the kernel build.  Since there is no
> >> swap partition, having CONFIG_SWAP seems a little useless.  Oddly
> >> enough, all of the default i.MX6 defconfigs set CONFIG_SWAP.  I'm seeing
> >> if we can remove that option.
> >> 
> >> John
> >> 
> >> On 7/13/13 4:26 PM, Chris Tapp wrote:
> >>> Hi John,
> >>> 
> >>> On 13 Jul 2013, at 01:07, John Weber wrote:
> >>>> Chris,
> >>>> 
> >>>> This looks like it is coming from the Vivante GPU driver in:
> >>>> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
> >>>> 
> >>>> How would I replicate this problem?
> >>> 
> >>> Good question! Simply running (with GST_DEBUG="*:2")
> >>> 
> >>> gst-launch playbin2 uri=http://media.w3.org/2010/05/sintel/trailer.webm
> >>> video-sink="queue2 ! mfw_v4lsink"
> >>> 
> >>> sometimes gives this:
> >>> 
> >>> [ 2445.396718] source:src: page allocation failure: order:11, mode:0xd1
> >>> [ 2445.403170] Backtrace:
> >>> [ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from [<c0421760>]
> >>> (dump_stack+0x18/0x1c) [ 2445.414222]  r6:e9b80000 r5:000000d1
> >>> r4:00000001 r3:00000000
> >>> [ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>]
> >>> (warn_alloc_failed+0xe4/0x104) [ 2445.428825] [<c00b7b04>]
> >>> (warn_alloc_failed+0x0/0x104) from [<c00ba12c>]
> >>> (__alloc_pages_nodemask+0x5b8/0x634) [ 2445.438806]  r3:00000000
> >>> r2:00000000
> >>> [ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b
> >>> r4:000000d1
> >>> [ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from
> >>> [<c004400c>] (__dma_alloc+0xc4/0x2b0) [ 2445.458728] [<c0043f48>]
> >>> (__dma_alloc+0x0/0x2b0) from [<c0044550>]
> >>> (dma_alloc_coherent+0x5c/0x68) [ 2445.467695] [<c00444f4>]
> >>> (dma_alloc_coherent+0x0/0x68) from [<c02f693c>]
> >>> (vpu_alloc_dma_buffer+0x34/0x5c) [ 2445.477323]  r7:e9b80000
> >>> r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
> >>> [ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from
> >>> [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac) [ 2445.492366]  r4:41efe940
> >>> r3:00000000
> >>> [ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>]
> >>> (vfs_ioctl+0x28/0x44) [ 2445.504025]  r8:e9b468d0 r7:00000007
> >>> r6:00005600 r5:e9b667a0 r4:e9b667a0 [ 2445.510888] [<c00f1424>]
> >>> (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508) [
> >>> 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>]
> >>> (sys_ioctl+0x3c/0x68) [ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68)
> >>> from [<c003ac00>] (ret_fast_syscall+0x0/0x30) [ 2445.535753] 
> >>> r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
> >>> [ 2445.541491] Mem-info:
> >>> [ 2445.543770] DMA per-cpu:
> >>> [ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
> >>> [ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
> >>> [ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
> >>> [ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
> >>> [ 2445.565529] Normal per-cpu:
> >>> [ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
> >>> [ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
> >>> [ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
> >>> [ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
> >>> [ 2445.587553] HighMem per-cpu:
> >>> [ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
> >>> [ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
> >>> [ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
> >>> [ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
> >>> [ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
> >>> [ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
> >>> [ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
> >>> [ 2445.609690]  free:456985 slab_reclaimable:305 slab_unreclaimable:1705
> >>> [ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
> >>> [ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB
> >>> active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB
> >>> unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB
> >>> mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
> >>> slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB
> >>> pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB
> >>> pages_scanned:0 all_unreclaimable? yes [ 2445.675225] lowmem_reserve[]:
> >>> 0 308 1673 1673
> >>> [ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB
> >>> active_anon:0kB inactive_anon:0kB active_file:2036kB
> >>> inactive_file:1676kB unevictable:0kB isolated(anon):0kB
> >>> isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB writeback:0kB
> >>> mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB
> >>> kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB
> >>> writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no [ 2445.717837]
> >>> lowmem_reserve[]: 0 0 10922 10922
> >>> [ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB
> >>> active_anon:5284kB inactive_anon:76kB active_file:7208kB
> >>> inactive_file:7428kB unevictable:0kB isolated(anon):0kB
> >>> isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB
> >>> writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB
> >>> slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB
> >>> bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no [
> >>> 2445.760550] lowmem_reserve[]: 0 0 0 0
> >>> [ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB
> >>> 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB 0*32768kB
> >>> = 48392kB [ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB
> >>> 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB
> >>> 9*32768kB = 392244kB [ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB
> >>> 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB
> >>> 1*8192kB 2*16384kB 40*32768kB = 1387304kB [ 2445.805269] 4614 total
> >>> pagecache pages
> >>> [ 2445.809023] 0 pages in swap cache
> >>> [ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
> >>> [ 2445.817586] Free swap  = 0kB
> >>> [ 2445.820483] Total swap = 0kB
> >>> [ 2445.870417] 524288 pages of RAM
> >>> [ 2445.873566] 458040 free pages
> >>> [ 2445.876537] 50744 reserved pages
> >>> [ 2445.879767] 2012 slab pages
> >>> [ 2445.882584] 5157 pages shared
> >>> [ 2445.885557] 0 pages swap cached
> >>> [ 2445.888705] Physical memory allocation error!
> >>> [ 2445.893083] Physical memory allocation error!
> >>> 
> >>> I'll have a go at getting the other one to show...
> >>> 
> >>> Chris Tapp
> >>> 
> >>> opensource@keylevel.com
> >>> www.keylevel.com
> > 
> > Chris Tapp
> > 
> > opensource@keylevel.com
> > www.keylevel.com
> 
> _______________________________________________
> meta-freescale mailing list
> meta-freescale@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/meta-freescale


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Wandboard Quad experience (good!)
  2013-07-16 15:58                     ` Thomas Senyk
@ 2013-07-16 16:45                       ` Thomas Senyk
  0 siblings, 0 replies; 16+ messages in thread
From: Thomas Senyk @ 2013-07-16 16:45 UTC (permalink / raw)
  To: meta-freescale

On Tuesday, 16 July, 2013 17:58:25 Thomas Senyk wrote:
> I do get the
> [ 1685.143891] Physical memory allocation error!
> 
> errors as well! (same back-trace so I will not repeat)
> 
> I'm using a nitrogen6x right now.
> I've already set CONFIG_SWAP=n, which didn't help for me.
> 
> This morning I used to somehow(! read a bit below why it's not a solution
> for me) work around this issue with this patch:
> http://pastebin.com/hRgpitvP
> 
> inspired by a patch from:
> https://community.freescale.com/message/316472#316472
> 
> 
> However this morning I switched to master for all my layers (meta-fsl-arm-
> extra is master-next), now if I apply this patch I get kernel-panic at
> start- up:
> < I don't have the log/back-trace right now, if requested I can provide it >
> 
> 
> 
> Anyway .. I'm not sure it's the right solution!... because although I got
> rid of the kernel errors I had a different problem then:
> 
> Somehow the kernel and/or the application started to misbehave at some
> point. The framebuffer went black (and even /dev/urandom couldn't do
> anything about it), and if I tried to kill the application (SIGTERM or
> SIGKILL) it became a zombie and consumed ~25-30% CPU continuously. Only a
> reboot helped.
> 
> Sound like 'it ended up in a kernel endless loop' .. doesn't it?

Some clarification about 'at some point':
I'm playing the same file (Big Buck Bunny, 1080p) over and over again, and after 
the 3rd-5th loop the screen goes dark and the application becomes a zombie.

Just tested with the new kernel: I only got the memory error twice and could 
continue playing just by trying again.




One addition from wandsolo:
I just finished my wandsolo build, and this error happens as soon as I try to 
play anything at 1080p. It works with 720p.

Any advice?
Maybe it's related... or maybe it's just that 512 MB isn't enough for 1080p, but 
that would be surprising, wouldn't it?



The GStreamer error:

[ERR]   mem allocation failed!


and the kernel error: 

[  848.073954] mxc_v4l2_output mxc_v4l2_output.0: Bypass IC.
[  848.191982] multiqueue0:src: page allocation failure: order:10, mode:0xd1
[  848.198814] [<c0043f3c>] (unwind_backtrace+0x0/0xf4) from [<c00bb154>] 
(warn_alloc_failed+0xd4/0x10c)
[  848.210079] [<c00bb154>] (warn_alloc_failed+0xd4/0x10c) from [<c00bd9c8>] 
(__alloc_pages_nodemask+0x540/0x6e4)
[  848.220690] [<c00bd9c8>] (__alloc_pages_nodemask+0x540/0x6e4) from 
[<c0046c68>] (__dma_alloc+0x9c/0x2fc)
[  848.230739] [<c0046c68>] (__dma_alloc+0x9c/0x2fc) from [<c0047200>] 
(dma_alloc_coherent+0x60/0x68)
[  848.240243] [<c0047200>] (dma_alloc_coherent+0x60/0x68) from [<c0362cf8>] 
(vpu_alloc_dma_buffer+0x2c/0x54)
[  848.250453] [<c0362cf8>] (vpu_alloc_dma_buffer+0x2c/0x54) from [<c03630c4>] 
(vpu_ioctl+0x3a4/0x884)
[  848.261304] [<c03630c4>] (vpu_ioctl+0x3a4/0x884) from [<c00f737c>] 
(do_vfs_ioctl+0x3b4/0x530)
[  848.273264] [<c00f737c>] (do_vfs_ioctl+0x3b4/0x530) from [<c00f752c>] 
(sys_ioctl+0x34/0x60)
[  848.282333] [<c00f752c>] (sys_ioctl+0x34/0x60) from [<c003cf80>] 
(ret_fast_syscall+0x0/0x30)
[  848.291320] Mem-info:
[  848.293598] DMA per-cpu:
[  848.296135] CPU    0: hi:   90, btch:  15 usd:  75
[  848.304645] Normal per-cpu:
[  848.307444] CPU    0: hi:   42, btch:   7 usd:  10
[  848.313230] active_anon:1874 inactive_anon:310 isolated_anon:0
[  848.313234]  active_file:4494 inactive_file:5604 isolated_file:0
[  848.313238]  unevictable:0 dirty:86 writeback:0 unstable:0
[  848.313241]  free:53231 slab_reclaimable:665 slab_unreclaimable:1546
[  848.313245]  mapped:2708 shmem:319 pagetables:174 bounce:0
[  848.344909] DMA free:16632kB min:1280kB low:1600kB high:1920kB 
active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB 
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocks
[  848.382001] lowmem_reserve[]: 0 149 149 149
[  848.386266] Normal free:196292kB min:1048kB low:1308kB high:1572kB 
active_anon:7496kB inactive_anon:1240kB active_file:17976kB 
inactive_file:22416kB unevictable:0kB isolated(anon):0kB isolated(file):0kB 
preso
[  848.426957] lowmem_reserve[]: 0 0 0 0
[  848.430730] DMA: 14*4kB 16*8kB 6*16kB 5*32kB 7*64kB 5*128kB 1*256kB 3*512kB 
13*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB 0*32768kB = 16632kB
[  848.444573] Normal: 495*4kB 227*8kB 541*16kB 347*32kB 217*64kB 99*128kB 
37*256kB 11*512kB 6*1024kB 5*2048kB 2*4096kB 3*8192kB 3*16384kB 1*32768kB = 
196292kB
[  848.459822] 10420 total pagecache pages
[  848.476162] 131072 pages of RAM
[  848.480052] 53427 free pages
[  848.482937] 47912 reserved pages
[  848.486166] 1654 slab pages
[  848.488960] 8246 pages shared
[  848.494135] 0 pages swap cached
[  848.497283] Physical memory allocation error!
[  848.502614] Physical memory allocation error!
[  853.599017] mxc_hdmi mxc_hdmi: same edid
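
A side note on the numbers in the trace above, assuming the usual 4 kB page 
size: an order-n page allocation asks for 2^n physically contiguous pages, so 
the order:10 request here is about 4 MB and the order:11 one from the earlier 
trace is about 8 MB. A quick check in plain C (illustration only, not from any 
of the code discussed in this thread):

/* Arithmetic only: size of an order-n page allocation, i.e. 2^n
 * contiguous 4 kB pages, for the orders quoted in this thread. */
#include <stdio.h>

int main(void)
{
    const unsigned long page_kb = 4;   /* assumed 4 kB pages on this ARM kernel */
    const int orders[] = { 10, 11 };   /* orders seen in the failures above     */

    for (int i = 0; i < 2; i++)
        printf("order:%d -> %lu kB contiguous\n",
               orders[i], page_kb << orders[i]);
    return 0;
}

/* order:10 -> 4096 kB, order:11 -> 8192 kB: the VPU's dma_alloc_coherent()
 * calls want multi-megabyte physically contiguous buffers, which can fail
 * from fragmentation even when plenty of RAM is free overall. */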


> 
> Greets
> Thomas
> 
> On Monday, 15 July, 2013 21:53:03 John Weber wrote:
> > Hi Chris,
> > 
> > You might want to consider emailing the wandboard-dev mailing list for
> > Wandboard kernel questions.
> > 
> > wandboard-dev@lists.wandboard.org
> > 
> > You'll need to sign up of course:
> > http://wandboard.org/cgi-bin/mailman/listinfo/wandboard-dev
> > 
> > Or the main user group at
> > wandboard@googlegroups.com
> > 
> > I haven't seen that message in using the video encoder functionality of
> > the
> > VPU from Gstreamer.  Just greping the source, that message comes from the
> > vpu driver (as expected).  During vpu_open(), it seems to be enabling the
> > clock to the VPU, then checking to see if the clock is enabled by checking
> > a program counter of what is I'm guessing the integrated bitstream
> > processor in the VPU.  If that is greater than 0x0, then it pops that
> > debug
> > message.  Then it disables the vpu clock, then moves on.
> > 
> > John
> > 
> > On 7/15/13 3:27 PM, Chris Tapp wrote:
> > > Hi John,
> > > 
> > > This (2nd crash) still happens when CONFIG_SWAP is not set, but it's
> > > just
> > > possible that the Vivante one has gone! I've had a board running my app.
> > > for over 6 hours hours and the problem hasn't shown yet. I'll keep it
> > > running...
> > > 
> > > However, I now see lots of:
> > > 
> > > [ 1481.696860] Not power off before vpu open!
> > > 
> > > Any idea what these are about?
> > > 
> > > I've also noticed that the Wandboard is very sensitive to power quality.
> > > The specs say a 5v, 2A PSU is suitable - but it's not if it's got a very
> > > fast current limit :-) I get reboots (uboot reports 'POR' reset cause)
> > > if
> > > I use a lab PSU set to 2A.>
> > > 
> > > On 14 Jul 2013, at 20:38, John Weber wrote:
> > >> Hi Chris,
> > >> 
> > >> Thanks.  You've probably noticed, but this is a different error from
> > >> the
> > >> first one you sent.  The first one was in the Vivante driver.
> > >> 
> > >> This one seems related to a memory limitation and perhaps related to
> > >> the
> > >> CONFIG_SWAP being on by default in the kernel build.  Since there is no
> > >> swap partition, having CONFIG_SWAP seems a little useless.  Oddly
> > >> enough, all of the default i.MX6 defconfigs set CONFIG_SWAP.  I'm
> > >> seeing
> > >> if we can remove that option.
> > >> 
> > >> John
> > >> 
> > >> On 7/13/13 4:26 PM, Chris Tapp wrote:
> > >>> Hi John,
> > >>> 
> > >>> On 13 Jul 2013, at 01:07, John Weber wrote:
> > >>>> Chris,
> > >>>> 
> > >>>> This looks like it is coming from the Vivante GPU driver in:
> > >>>> drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, line 1315.
> > >>>> 
> > >>>> How would I replicate this problem?
> > >>> 
> > >>> Good question! Simply running (with GST_DEBUG="*:2")
> > >>> 
> > >>> gst-launch playbin2
> > >>> uri=http://media.w3.org/2010/05/sintel/trailer.webm
> > >>> video-sink="queue2 ! mfw_v4lsink"
> > >>> 
> > >>> sometimes gives this:
> > >>> 
> > >>> [ 2445.396718] source:src: page allocation failure: order:11,
> > >>> mode:0xd1
> > >>> [ 2445.403170] Backtrace:
> > >>> [ 2445.405710] [<c003e358>] (dump_backtrace+0x0/0x104) from
> > >>> [<c0421760>]
> > >>> (dump_stack+0x18/0x1c) [ 2445.414222]  r6:e9b80000 r5:000000d1
> > >>> r4:00000001 r3:00000000
> > >>> [ 2445.420029] [<c0421748>] (dump_stack+0x0/0x1c) from [<c00b7be8>]
> > >>> (warn_alloc_failed+0xe4/0x104) [ 2445.428825] [<c00b7b04>]
> > >>> (warn_alloc_failed+0x0/0x104) from [<c00ba12c>]
> > >>> (__alloc_pages_nodemask+0x5b8/0x634) [ 2445.438806]  r3:00000000
> > >>> r2:00000000
> > >>> [ 2445.442482]  r8:00000000 r7:00000000 r6:e9b80000 r5:0000000b
> > >>> r4:000000d1
> > >>> [ 2445.449342] [<c00b9b74>] (__alloc_pages_nodemask+0x0/0x634) from
> > >>> [<c004400c>] (__dma_alloc+0xc4/0x2b0) [ 2445.458728] [<c0043f48>]
> > >>> (__dma_alloc+0x0/0x2b0) from [<c0044550>]
> > >>> (dma_alloc_coherent+0x5c/0x68) [ 2445.467695] [<c00444f4>]
> > >>> (dma_alloc_coherent+0x0/0x68) from [<c02f693c>]
> > >>> (vpu_alloc_dma_buffer+0x34/0x5c) [ 2445.477323]  r7:e9b80000
> > >>> r6:e9cb6f28 r5:e9cb6f20 r4:e9cb6f28
> > >>> [ 2445.483422] [<c02f6908>] (vpu_alloc_dma_buffer+0x0/0x5c) from
> > >>> [<c02f6a38>] (vpu_ioctl+0xd4/0x7ac) [ 2445.492366]  r4:41efe940
> > >>> r3:00000000
> > >>> [ 2445.496042] [<c02f6964>] (vpu_ioctl+0x0/0x7ac) from [<c00f144c>]
> > >>> (vfs_ioctl+0x28/0x44) [ 2445.504025]  r8:e9b468d0 r7:00000007
> > >>> r6:00005600 r5:e9b667a0 r4:e9b667a0 [ 2445.510888] [<c00f1424>]
> > >>> (vfs_ioctl+0x0/0x44) from [<c00f1e78>] (do_vfs_ioctl+0x414/0x508) [
> > >>> 2445.519167] [<c00f1a64>] (do_vfs_ioctl+0x0/0x508) from [<c00f1fa8>]
> > >>> (sys_ioctl+0x3c/0x68) [ 2445.527381] [<c00f1f6c>] (sys_ioctl+0x0/0x68)
> > >>> from [<c003ac00>] (ret_fast_syscall+0x0/0x30) [ 2445.535753]
> > >>> r7:00000036 r6:415c11ac r5:420762a8 r4:41efe940
> > >>> [ 2445.541491] Mem-info:
> > >>> [ 2445.543770] DMA per-cpu:
> > >>> [ 2445.546309] CPU    0: hi:   90, btch:  15 usd:  84
> > >>> [ 2445.551119] CPU    1: hi:   90, btch:  15 usd:  84
> > >>> [ 2445.555918] CPU    2: hi:   90, btch:  15 usd:   0
> > >>> [ 2445.560733] CPU    3: hi:   90, btch:  15 usd:  98
> > >>> [ 2445.565529] Normal per-cpu:
> > >>> [ 2445.568330] CPU    0: hi:   90, btch:  15 usd:  80
> > >>> [ 2445.573146] CPU    1: hi:   90, btch:  15 usd:  66
> > >>> [ 2445.577944] CPU    2: hi:   90, btch:  15 usd:  77
> > >>> [ 2445.582757] CPU    3: hi:   90, btch:  15 usd:  19
> > >>> [ 2445.587553] HighMem per-cpu:
> > >>> [ 2445.590455] CPU    0: hi:  186, btch:  31 usd:  16
> > >>> [ 2445.595254] CPU    1: hi:  186, btch:  31 usd:  37
> > >>> [ 2445.600052] CPU    2: hi:  186, btch:  31 usd: 172
> > >>> [ 2445.604863] CPU    3: hi:  186, btch:  31 usd: 181
> > >>> [ 2445.609675] active_anon:1321 inactive_anon:19 isolated_anon:0
> > >>> [ 2445.609680]  active_file:2311 inactive_file:2276 isolated_file:0
> > >>> [ 2445.609686]  unevictable:0 dirty:1 writeback:0 unstable:0
> > >>> [ 2445.609690]  free:456985 slab_reclaimable:305
> > >>> slab_unreclaimable:1705
> > >>> [ 2445.609696]  mapped:1837 shmem:43 pagetables:153 bounce:0
> > >>> [ 2445.638709] DMA free:48392kB min:1052kB low:1312kB high:1576kB
> > >>> active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB
> > >>> unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB
> > >>> mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
> > >>> slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB
> > >>> pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB
> > >>> pages_scanned:0 all_unreclaimable? yes [ 2445.675225]
> > >>> lowmem_reserve[]:
> > >>> 0 308 1673 1673
> > >>> [ 2445.679676] Normal free:392244kB min:1776kB low:2220kB high:2664kB
> > >>> active_anon:0kB inactive_anon:0kB active_file:2036kB
> > >>> inactive_file:1676kB unevictable:0kB isolated(anon):0kB
> > >>> isolated(file):0kB present:315584kB mlocked:0kB dirty:4kB
> > >>> writeback:0kB
> > >>> mapped:0kB shmem:0kB slab_reclaimable:1220kB slab_unreclaimable:6820kB
> > >>> kernel_stack:656kB pagetables:612kB unstable:0kB bounce:0kB
> > >>> writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no [ 2445.717837]
> > >>> lowmem_reserve[]: 0 0 10922 10922
> > >>> [ 2445.722299] HighMem free:1387304kB min:512kB low:2480kB high:4448kB
> > >>> active_anon:5284kB inactive_anon:76kB active_file:7208kB
> > >>> inactive_file:7428kB unevictable:0kB isolated(anon):0kB
> > >>> isolated(file):0kB present:1398016kB mlocked:0kB dirty:0kB
> > >>> writeback:0kB mapped:7348kB shmem:172kB slab_reclaimable:0kB
> > >>> slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB
> > >>> bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no [
> > >>> 2445.760550] lowmem_reserve[]: 0 0 0 0
> > >>> [ 2445.764287] DMA: 10*4kB 12*8kB 14*16kB 13*32kB 12*64kB 12*128kB
> > >>> 7*256kB 3*512kB 5*1024kB 6*2048kB 6*4096kB 0*8192kB 0*16384kB
> > >>> 0*32768kB
> > >>> = 48392kB [ 2445.777566] Normal: 99*4kB 71*8kB 41*16kB 9*32kB 5*64kB
> > >>> 3*128kB 2*256kB 2*512kB 1*1024kB 1*2048kB 2*4096kB 4*8192kB 3*16384kB
> > >>> 9*32768kB = 392244kB [ 2445.790945] HighMem: 400*4kB 205*8kB 114*16kB
> > >>> 131*32kB 62*64kB 29*128kB 13*256kB 2*512kB 4*1024kB 1*2048kB 2*4096kB
> > >>> 1*8192kB 2*16384kB 40*32768kB = 1387304kB [ 2445.805269] 4614 total
> > >>> pagecache pages
> > >>> [ 2445.809023] 0 pages in swap cache
> > >>> [ 2445.812355] Swap cache stats: add 0, delete 0, find 0/0
> > >>> [ 2445.817586] Free swap  = 0kB
> > >>> [ 2445.820483] Total swap = 0kB
> > >>> [ 2445.870417] 524288 pages of RAM
> > >>> [ 2445.873566] 458040 free pages
> > >>> [ 2445.876537] 50744 reserved pages
> > >>> [ 2445.879767] 2012 slab pages
> > >>> [ 2445.882584] 5157 pages shared
> > >>> [ 2445.885557] 0 pages swap cached
> > >>> [ 2445.888705] Physical memory allocation error!
> > >>> [ 2445.893083] Physical memory allocation error!
> > >>> 
> > >>> I'll have a go at getting the other one to show...
> > >>> 
> > >>> Chris Tapp
> > >>> 
> > >>> opensource@keylevel.com
> > >>> www.keylevel.com
> > > 
> > > Chris Tapp
> > > 
> > > opensource@keylevel.com
> > > www.keylevel.com
> > 
> > _______________________________________________
> > meta-freescale mailing list
> > meta-freescale@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/meta-freescale


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2013-07-16 16:46 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-09 21:38 Wandboard Quad experience (good!) Chris Tapp
2013-07-09 21:51 ` Otavio Salvador
2013-07-09 22:03   ` Chris Tapp
2013-07-09 23:05     ` Chris Tapp
2013-07-10  2:57       ` John Weber
2013-07-10  7:50         ` Chris Tapp
2013-07-10 12:44           ` Otavio Salvador
2013-07-12 22:50         ` Chris Tapp
2013-07-13  0:07           ` John Weber
2013-07-13 21:26             ` Chris Tapp
2013-07-14 19:38               ` John Weber
2013-07-15  7:46                 ` Chris Tapp
2013-07-15 20:27                 ` Chris Tapp
2013-07-16  2:53                   ` John Weber
2013-07-16 15:58                     ` Thomas Senyk
2013-07-16 16:45                       ` Thomas Senyk
