* Loss of several MB of run-time memory
@ 2018-10-09 16:53 Patrick Venture
2018-10-09 16:57 ` Tanous, Ed
0 siblings, 1 reply; 11+ messages in thread
From: Patrick Venture @ 2018-10-09 16:53 UTC (permalink / raw)
To: OpenBMC Maillist
Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
stuff once things are settled, whereas before I could have up to
35MiB.
Here are some dumps:
Now:
[ 0.000000] CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=0005317f
[ 0.000000] CPU: VIVT data cache, VIVT instruction cache
[ 0.000000] OF: fdt: Machine model: Quanta Q71L BMC
[ 0.000000] Memory policy: Data cache writeback
[ 0.000000] On node 0 totalpages: 30720
[ 0.000000] Normal zone: 240 pages used for memmap
[ 0.000000] Normal zone: 0 pages reserved
[ 0.000000] Normal zone: 30720 pages, LIFO batch:7
[ 0.000000] random: get_random_bytes called from
start_kernel+0x8c/0x4c0 with crng_init=0
[ 0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
[ 0.000000] pcpu-alloc: [0] 0
[ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 30480
[ 0.000000] Kernel command line: console=ttyS4,115200n8
root=/dev/ram rw clk_ignore_unused
[ 0.000000] Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
[ 0.000000] Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
[ 0.000000] Memory: 111076K/122880K available (5120K kernel code,
365K rwdata, 1104K rodata, 1024K init, 143K bss, 11804K reserved, 0K
cma-reserved)
[ 0.000000] Virtual kernel memory layout:
[ 0.000000] vector : 0xffff0000 - 0xffff1000 ( 4 kB)
[ 0.000000] fixmap : 0xffc00000 - 0xfff00000 (3072 kB)
[ 0.000000] vmalloc : 0x88000000 - 0xff800000 (1912 MB)
[ 0.000000] lowmem : 0x80000000 - 0x87800000 ( 120 MB)
[ 0.000000] .text : 0x(ptrval) - 0x(ptrval) (6112 kB)
[ 0.000000] .init : 0x(ptrval) - 0x(ptrval) (1024 kB)
[ 0.000000] .data : 0x(ptrval) - 0x(ptrval) ( 366 kB)
[ 0.000000] .bss : 0x(ptrval) - 0x(ptrval) ( 144 kB)
[ 0.000000] ftrace: allocating 18546 entries in 55 pages
cat /proc/meminfo
MemTotal: 113952 kB
MemFree: 19944 kB
MemAvailable: 62432 kB
Buffers: 11032 kB
Cached: 48732 kB
SwapCached: 0 kB
Active: 40940 kB
Inactive: 26728 kB
Active(anon): 17068 kB
Inactive(anon): 9316 kB
Active(file): 23872 kB
Inactive(file): 17412 kB
Unevictable: 9088 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 17008 kB
Mapped: 21120 kB
Shmem: 9392 kB
Slab: 11956 kB
SReclaimable: 6472 kB
SUnreclaim: 5484 kB
KernelStack: 560 kB
PageTables: 1384 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 56976 kB
Committed_AS: 124224 kB
VmallocTotal: 1957888 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Before:
dmesg
Normal zone: 30720 pages, LIFO batch:7
pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
pcpu-alloc: [0] 0
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 30480
Kernel command line: console=ttyS4,115200n8 root=/dev/ram rw
PID hash table entries: 512 (order: -1, 2048 bytes)
Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
Memory: 113644K/122880K available (4206K kernel code, 150K rwdata,
860K rodata, 1024K init, 111K bss, 9236K reserved, 0K cma-reserved)
Virtual kernel memory layout:
vector : 0xffff0000 - 0xffff1000 ( 4 kB)
fixmap : 0xffc00000 - 0xfff00000 (3072 kB)
vmalloc : 0xc8000000 - 0xff800000 ( 888 MB)
lowmem : 0xc0000000 - 0xc7800000 ( 120 MB)
.text : 0xc0008000 - 0xc05f28ec (6059 kB)
.init : 0xc0600000 - 0xc0700000 (1024 kB)
.data : 0xc0700000 - 0xc0725be8 ( 151 kB)
.bss : 0xc0725be8 - 0xc0741a38 ( 112 kB)
cat /proc/meminfo
MemTotal: 116224 kB
MemFree: 35832 kB
MemAvailable: 76952 kB
Buffers: 9596 kB
Cached: 39776 kB
SwapCached: 0 kB
Active: 40516 kB
Inactive: 25432 kB
Active(anon): 17004 kB
Inactive(anon): 6968 kB
Active(file): 23512 kB
Inactive(file): 18464 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 16588 kB
Mapped: 20064 kB
Shmem: 7396 kB
Slab: 9424 kB
SReclaimable: 4532 kB
SUnreclaim: 4892 kB
KernelStack: 720 kB
PageTables: 1328 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 58112 kB
Committed_AS: 142324 kB
VmallocTotal: 909312 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
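The two meminfo dumps above can be compared field by field rather than by eye. A minimal sketch of such a diff (a hypothetical helper, not part of this thread; the sample strings below are abbreviated from the dumps above):

```python
import re

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a {field: kB} dict of ints."""
    fields = {}
    for line in text.splitlines():
        m = re.match(r"(\S+):\s+(\d+)\s*kB", line)
        if m:
            fields[m.group(1)] = int(m.group(2))
    return fields

def diff_meminfo(new, old):
    """Return {field: new - old} in kB for fields present in both dumps."""
    a, b = parse_meminfo(new), parse_meminfo(old)
    return {k: a[k] - b[k] for k in a if k in b}

if __name__ == "__main__":
    now = "MemTotal: 113952 kB\nMemFree: 19944 kB\n"
    before = "MemTotal: 116224 kB\nMemFree: 35832 kB\n"
    for field, delta in sorted(diff_meminfo(now, before).items()):
        print(f"{field}: {delta:+d} kB")
    # MemFree: -15888 kB
    # MemTotal: -2272 kB
```

Run over the full dumps, a diff like this makes the interesting movers obvious: for example, Unevictable went from 0 kB to 9088 kB between the two captures above.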
This matters for a few reasons:
1) my memory chip is too small to be practical and I need all the
bytes I can get.
2) I need at least 32MiB to load a new firmware image.
I dropped all the python except the mapper, and I dropped the newer
daemons from my build to clear out that difference. It was originally
about 16MiB difference, so I was thinking that something is now being
mapped by default that wasn't before, such as part of a flash image.
Patrick
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 16:53 Loss of several MB of run-time memory Patrick Venture
@ 2018-10-09 16:57 ` Tanous, Ed
2018-10-09 17:06 ` Kun Yi
2018-10-09 17:06 ` Patrick Venture
0 siblings, 2 replies; 11+ messages in thread
From: Tanous, Ed @ 2018-10-09 16:57 UTC (permalink / raw)
To: Patrick Venture; +Cc: OpenBMC Maillist
Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
-Ed
> On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
>
> [snip]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 16:57 ` Tanous, Ed
@ 2018-10-09 17:06 ` Kun Yi
2018-10-09 17:20 ` Patrick Venture
2018-10-09 22:25 ` Joel Stanley
2018-10-09 17:06 ` Patrick Venture
1 sibling, 2 replies; 11+ messages in thread
From: Kun Yi @ 2018-10-09 17:06 UTC (permalink / raw)
To: ed.tanous; +Cc: Patrick Venture, OpenBMC Maillist
A somewhat tedious way to test would be to build and boot with 'bitbake
core-image-minimal' to ensure no phosphor daemons are loaded, and then
compare the kernel memory footprint.
On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
> Was this only the kernel version jump, or did you jump in openbmc/phosphor
> levels as well? There have been quite a few daemons added in the last 6
> months or so that could explain your memory footprint increase.
>
> -Ed
>
>
>
> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
> > [snip]
--
Regards,
Kun
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 16:57 ` Tanous, Ed
2018-10-09 17:06 ` Kun Yi
@ 2018-10-09 17:06 ` Patrick Venture
1 sibling, 0 replies; 11+ messages in thread
From: Patrick Venture @ 2018-10-09 17:06 UTC (permalink / raw)
To: Tanous, Ed; +Cc: OpenBMC Maillist
On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
>
> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
So, I did also jump phosphor versions, but my running process list is
nearly identical to what it was, so I don't expect that alone to cost
substantially more memory.
ps -eaf
UID PID PPID C STIME TTY TIME CMD
root 1 0 1 14:44 ? 00:00:21 /sbin/init
root 2 0 0 14:44 ? 00:00:00 [kthreadd]
root 4 2 0 14:44 ? 00:00:00 [kworker/0:0H-kb]
root 6 2 0 14:44 ? 00:00:00 [mm_percpu_wq]
root 7 2 0 14:44 ? 00:00:03 [ksoftirqd/0]
root 8 2 0 14:44 ? 00:00:00 [watchdog/0]
root 9 2 0 14:44 ? 00:00:00 [kdevtmpfs]
root 10 2 0 14:44 ? 00:00:00 [oom_reaper]
root 71 2 0 14:44 ? 00:00:00 [kworker/u2:2-ev]
root 171 2 0 14:44 ? 00:00:00 [writeback]
root 173 2 0 14:44 ? 00:00:00 [crypto]
root 175 2 0 14:44 ? 00:00:00 [kblockd]
root 194 2 0 14:44 ? 00:00:00 [watchdogd]
root 200 2 0 14:44 ? 00:00:05 [kworker/0:1-eve]
root 220 2 0 14:44 ? 00:00:00 [kswapd0]
root 269 2 0 14:44 ? 00:00:00 [hwrng]
root 508 2 0 14:44 ? 00:00:00 [ipv6_addrconf]
root 538 2 0 14:44 ? 00:00:00 [kworker/0:1H-kb]
root 566 2 0 14:44 ? 00:00:00 [jffs2_gcd_mtd5]
root 684 1 0 14:44 ? 00:00:17 /lib/systemd/systemd-journald
root 704 1 0 14:44 ? 00:00:01 /lib/systemd/systemd-udevd
systemd+ 927 1 0 14:44 ? 00:00:01 /lib/systemd/systemd-timesyncd
systemd+ 965 1 0 14:44 ? 00:00:01 /lib/systemd/systemd-resolved
root 969 1 0 14:44 ? 00:00:02 /usr/sbin/rsyslogd -n
root 970 1 0 14:44 ttyS5 00:00:02 obmc-console-server
--config /etc/obmc-console.conf ttyVUART0
message+ 974 1 5 14:44 ? 00:01:46 /usr/bin/dbus-daemon
--system --address=systemd: --nofork --nopidfile --systemd-activation
--syslog-only
root 982 1 0 14:44 ? 00:00:14 btbridged
root 983 1 0 14:44 ? 00:00:01 /bin/bash
/usr/sbin/set_gateway_arp.sh eth1
root 984 1 0 14:44 ? 00:00:00 /usr/sbin/snoopd -d
/dev/aspeed-lpc-snoop0 -b 1
root 985 1 0 14:44 ? 00:00:00 phosphor-log-manager
root 986 1 0 14:44 ? 00:00:00 phosphor-settings-manager
root 987 1 0 14:44 ? 00:00:00 phosphor-rsyslog-conf
root 989 1 2 14:44 ? 00:00:47 python
/usr/sbin/phosphor-mapper --path_namespaces=/org/openbmc
/xyz/openbmc_project --interface_namespaces=xyz.openbmc_project
org.openbmc --blacklists= --
root 990 1 0 14:44 ? 00:00:02 phosphor-watchdog
--service=xyz.openbmc_project.Watchdog
--path=/xyz/openbmc_project/watchdog/host0
--target=iceblink-reset.target --continue
root 991 1 0 14:44 ? 00:00:00 /usr/sbin/rngd -f
root 993 1 0 14:44 ? 00:00:01 phosphor-inventory
avahi 994 1 0 14:44 ? 00:00:01 avahi-daemon: running
[iceblink.local]
avahi 995 994 0 14:45 ? 00:00:00 avahi-daemon: chroot helper
root 998 1 1 14:45 ? 00:00:28 ipmid
root 1000 1 0 14:45 ? 00:00:00 slpd
root 1001 1 0 14:45 ttyS4 00:00:00 /bin/login --
root 1005 1 0 14:45 ? 00:00:00 phosphor-hwmon-readd
-o /iio-hwmon-battery
root 1008 1 0 14:45 ? 00:00:02 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4c
root 1016 1 0 14:45 ? 00:00:00 phosphor-network-manager
root 1017 1 0 14:45 ? 00:00:00 phosphor-dbus-monitor
root 1021 1 0 14:45 ? 00:00:00 phosphor-network-snmpconf
root 1024 1 0 14:45 ? 00:00:00 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@0/psu@59
root 1025 1 0 14:45 ? 00:00:02 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4f
root 1026 1 0 14:45 ? 00:00:00 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@1/psu@58
root 1027 1 0 14:45 ? 00:00:02 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4e
root 1028 1 0 14:45 ? 00:00:01 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@2/psu@58
root 1029 1 0 14:45 ? 00:00:16 phosphor-hwmon-readd
-o /ahb/apb/pwm-tacho-controller@1e786000
root 1030 1 0 14:45 ? 00:00:00 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@3/psu@59
root 1034 1 1 14:45 ? 00:00:22 phosphor-hwmon-readd
-o /iio-hwmon
systemd+ 1046 1 0 14:45 ? 00:00:00 /lib/systemd/systemd-networkd
root 1057 1 0 14:45 ? 00:00:00 /usr/sbin/watchdog
root 1070 1 2 14:45 ? 00:00:56 /usr/sbin/swampd
root 1090 1001 0 14:46 ttyS4 00:00:00 -sh
root 1123 2 0 14:50 ? 00:00:00 [kworker/u2:0-ev]
root 1272 2 0 15:13 ? 00:00:00 [kworker/0:2-eve]
root 1304 983 1 15:18 ? 00:00:00 sleep 10
root 1305 1090 0 15:18 ttyS4 00:00:00 ps -eaf
>
> -Ed
>
>
>
> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
> > [snip]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 17:06 ` Kun Yi
@ 2018-10-09 17:20 ` Patrick Venture
2018-10-09 18:55 ` Patrick Venture
2018-10-09 22:25 ` Joel Stanley
1 sibling, 1 reply; 11+ messages in thread
From: Patrick Venture @ 2018-10-09 17:20 UTC (permalink / raw)
To: Kun Yi; +Cc: Tanous, Ed, OpenBMC Maillist
On Tue, Oct 9, 2018 at 10:06 AM Kun Yi <kunyi@google.com> wrote:
>
> A somewhat tedious way to test would be to build and boot with 'bitbake
> core-image-minimal' to ensure no phosphor daemons are loaded, and then
> compare the kernel memory footprint.
So, I went through and dropped all the unused IMAGE_FEATURES, which
dropped a lot of daemons. The two images should now be about the same,
but I can do some side-by-side comparisons of the filesystems. The ps
output is nearly identical.
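A side-by-side of the two filesystems could also be scripted. The sketch below (hypothetical, not from the thread; the `new_root`/`old_root` paths are assumptions) tallies per-top-level-entry sizes under two extracted root filesystems and sorts by the largest difference:

```python
import os

def dir_sizes(root):
    """Total file size in bytes for each top-level entry under root."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_file(follow_symlinks=False):
            sizes[entry.name] = entry.stat().st_size
        elif entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _, files in os.walk(entry.path):
                for f in files:
                    p = os.path.join(dirpath, f)
                    if not os.path.islink(p):
                        total += os.path.getsize(p)
            sizes[entry.name] = total
    return sizes

def compare_roots(new_root, old_root):
    """Return (name, new_bytes, old_bytes, delta) rows, largest |delta| first."""
    a, b = dir_sizes(new_root), dir_sizes(old_root)
    rows = [(n, a.get(n, 0), b.get(n, 0), a.get(n, 0) - b.get(n, 0))
            for n in sorted(set(a) | set(b))]
    return sorted(rows, key=lambda r: -abs(r[3]))
```

Pointing `compare_roots()` at the two unpacked images would show which directories (and hence which daemons or libraries) grew between the builds.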
>
> On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
>>
>> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
>>
>> -Ed
>>
>>
>>
>> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
>> > [snip]
>
>
>
> --
> Regards,
> Kun
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 17:20 ` Patrick Venture
@ 2018-10-09 18:55 ` Patrick Venture
0 siblings, 0 replies; 11+ messages in thread
From: Patrick Venture @ 2018-10-09 18:55 UTC (permalink / raw)
To: Kun Yi; +Cc: Tanous, Ed, OpenBMC Maillist
On Tue, Oct 9, 2018 at 10:20 AM Patrick Venture <venture@google.com> wrote:
>
> On Tue, Oct 9, 2018 at 10:06 AM Kun Yi <kunyi@google.com> wrote:
> >
> > A somewhat tedious way to test would be to build and boot with 'bitbake core-image-minimal' to ensure no phosphor-daemons are loaded, and
> then compare the kernel memory footprint.
>
> So, I went through and dropped all the unused IMAGE_FEATURES, which
> dropped a lot of daemons. I should have about the same content
> between the two images, but I can do some side-by-side comparisons
> of the filesystem. The ps output is nearly identical.
Item           |    4.17 |     4.7 |   Delta
               |      kB |      kB |      kB
MemTotal       |  113944 |  116224 |   -2280
MemFree        |   19316 |   35888 |  -16572
MemAvailable   |   61700 |   77012 |  -15312
Buffers        |   11072 |    9596 |    1476
Cached         |   48872 |   39616 |    9256
Active         |   39012 |   40468 |   -1456
Inactive       |   29200 |   25432 |    3768
Active(anon)   |   17432 |   16956 |     476
Inactive(anon) |    9332 |    6968 |    2364
Active(file)   |   21580 |   23512 |   -1932
Inactive(file) |   19868 |   18464 |    1404
Unevictable    |    9088 |       0 |    9088
AnonPages      |   17372 |   16700 |     672
Mapped         |   22704 |   19712 |    2992
Shmem          |    9408 |    7236 |    2172
Slab           |   11704 |    9428 |    2276
SReclaimable   |    6204 |    4536 |    1668
SUnreclaim     |    5500 |    4892 |     608
KernelStack    |     648 |     720 |     -72
PageTables     |    1472 |    1328 |     144
CommitLimit    |   56972 |   58112 |   -1140
Committed_AS   |  125448 |  142372 |  -16924
VmallocTotal   | 1957888 |  909312 | 1048576
^^^ view the above in monospace and it'll look nice.
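[Editor's note: not part of the original mail — a small helper, sketched here for reference, that builds the same comparison table from two /proc/meminfo captures instead of doing it by hand. The sample strings are trimmed from the dumps in this thread.]

```python
def parse_meminfo(text):
    """Map field name -> value in kB from /proc/meminfo output."""
    fields = {}
    for line in text.strip().splitlines():
        name, _, rest = line.partition(":")
        fields[name.strip()] = int(rest.split()[0])
    return fields

def diff_meminfo(new_text, old_text):
    """Return (name, new, old, delta) rows for fields present in both captures."""
    new, old = parse_meminfo(new_text), parse_meminfo(old_text)
    return [(k, new[k], old[k], new[k] - old[k]) for k in new if k in old]

# Sample values trimmed from the dumps above (4.17 vs. 4.7).
NEW = """\
MemTotal:       113944 kB
MemFree:         19316 kB
"""
OLD = """\
MemTotal:       116224 kB
MemFree:         35888 kB
"""

for name, n, o, d in diff_meminfo(NEW, OLD):
    print(f"{name:<14} | {n:>8} | {o:>8} | {d:>8}")
```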
>
> >
> > On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
> >>
> >> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
> >>
> >> -Ed
> >>
> >>
> >>
> >> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
> >> >
> >> > Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
> >> > on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
> >> > stuff once things are settled, whereas before I could have up to
> >> > 35MiB.
> >> >
> >> > Here are some dumps:
> >> >
> >> > Now:
> >> > [ 0.000000] CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=0005317f
> >> > [ 0.000000] CPU: VIVT data cache, VIVT instruction cache
> >> > [ 0.000000] OF: fdt: Machine model: Quanta Q71L BMC
> >> > [ 0.000000] Memory policy: Data cache writeback
> >> > [ 0.000000] On node 0 totalpages: 30720
> >> > [ 0.000000] Normal zone: 240 pages used for memmap
> >> > [ 0.000000] Normal zone: 0 pages reserved
> >> > [ 0.000000] Normal zone: 30720 pages, LIFO batch:7
> >> > [ 0.000000] random: get_random_bytes called from
> >> > start_kernel+0x8c/0x4c0 with crng_init=0
> >> > [ 0.000000] pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
> >> > [ 0.000000] pcpu-alloc: [0] 0
> >> > [ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 30480
> >> > [ 0.000000] Kernel command line: console=ttyS4,115200n8
> >> > root=/dev/ram rw clk_ignore_unused
> >> > [ 0.000000] Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
> >> > [ 0.000000] Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
> >> > [ 0.000000] Memory: 111076K/122880K available (5120K kernel code,
> >> > 365K rwdata, 1104K rodata, 1024K init, 143K bss, 11804K reserved, 0K
> >> > cma-reserved)
> >> > [ 0.000000] Virtual kernel memory layout:
> >> > [ 0.000000] vector : 0xffff0000 - 0xffff1000 ( 4 kB)
> >> > [ 0.000000] fixmap : 0xffc00000 - 0xfff00000 (3072 kB)
> >> > [ 0.000000] vmalloc : 0x88000000 - 0xff800000 (1912 MB)
> >> > [ 0.000000] lowmem : 0x80000000 - 0x87800000 ( 120 MB)
> >> > [ 0.000000] .text : 0x(ptrval) - 0x(ptrval) (6112 kB)
> >> > [ 0.000000] .init : 0x(ptrval) - 0x(ptrval) (1024 kB)
> >> > [ 0.000000] .data : 0x(ptrval) - 0x(ptrval) ( 366 kB)
> >> > [ 0.000000] .bss : 0x(ptrval) - 0x(ptrval) ( 144 kB)
> >> > [ 0.000000] ftrace: allocating 18546 entries in 55 pages
> >> >
> >> > cat /proc/meminfo
> >> > MemTotal: 113952 kB
> >> > MemFree: 19944 kB
> >> > MemAvailable: 62432 kB
> >> > Buffers: 11032 kB
> >> > Cached: 48732 kB
> >> > SwapCached: 0 kB
> >> > Active: 40940 kB
> >> > Inactive: 26728 kB
> >> > Active(anon): 17068 kB
> >> > Inactive(anon): 9316 kB
> >> > Active(file): 23872 kB
> >> > Inactive(file): 17412 kB
> >> > Unevictable: 9088 kB
> >> > Mlocked: 0 kB
> >> > SwapTotal: 0 kB
> >> > SwapFree: 0 kB
> >> > Dirty: 0 kB
> >> > Writeback: 0 kB
> >> > AnonPages: 17008 kB
> >> > Mapped: 21120 kB
> >> > Shmem: 9392 kB
> >> > Slab: 11956 kB
> >> > SReclaimable: 6472 kB
> >> > SUnreclaim: 5484 kB
> >> > KernelStack: 560 kB
> >> > PageTables: 1384 kB
> >> > NFS_Unstable: 0 kB
> >> > Bounce: 0 kB
> >> > WritebackTmp: 0 kB
> >> > CommitLimit: 56976 kB
> >> > Committed_AS: 124224 kB
> >> > VmallocTotal: 1957888 kB
> >> > VmallocUsed: 0 kB
> >> > VmallocChunk: 0 kB
> >> >
> >> > Before:
> >> > dmesg
> >> > Normal zone: 30720 pages, LIFO batch:7
> >> > pcpu-alloc: s0 r0 d32768 u32768 alloc=1*32768
> >> > pcpu-alloc: [0] 0
> >> > Built 1 zonelists in Zone order, mobility grouping on. Total pages: 30480
> >> > Kernel command line: console=ttyS4,115200n8 root=/dev/ram rw
> >> > PID hash table entries: 512 (order: -1, 2048 bytes)
> >> > Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
> >> > Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
> >> > Memory: 113644K/122880K available (4206K kernel code, 150K rwdata,
> >> > 860K rodata, 1024K init, 111K bss, 9236K reserved, 0K cma-reserved)
> >> > Virtual kernel memory layout:
> >> > vector : 0xffff0000 - 0xffff1000 ( 4 kB)
> >> > fixmap : 0xffc00000 - 0xfff00000 (3072 kB)
> >> > vmalloc : 0xc8000000 - 0xff800000 ( 888 MB)
> >> > lowmem : 0xc0000000 - 0xc7800000 ( 120 MB)
> >> > .text : 0xc0008000 - 0xc05f28ec (6059 kB)
> >> > .init : 0xc0600000 - 0xc0700000 (1024 kB)
> >> > .data : 0xc0700000 - 0xc0725be8 ( 151 kB)
> >> > .bss : 0xc0725be8 - 0xc0741a38 ( 112 kB)
> >> >
> >> > cat /proc/meminfo
> >> > MemTotal: 116224 kB
> >> > MemFree: 35832 kB
> >> > MemAvailable: 76952 kB
> >> > Buffers: 9596 kB
> >> > Cached: 39776 kB
> >> > SwapCached: 0 kB
> >> > Active: 40516 kB
> >> > Inactive: 25432 kB
> >> > Active(anon): 17004 kB
> >> > Inactive(anon): 6968 kB
> >> > Active(file): 23512 kB
> >> > Inactive(file): 18464 kB
> >> > Unevictable: 0 kB
> >> > Mlocked: 0 kB
> >> > SwapTotal: 0 kB
> >> > SwapFree: 0 kB
> >> > Dirty: 0 kB
> >> > Writeback: 0 kB
> >> > AnonPages: 16588 kB
> >> > Mapped: 20064 kB
> >> > Shmem: 7396 kB
> >> > Slab: 9424 kB
> >> > SReclaimable: 4532 kB
> >> > SUnreclaim: 4892 kB
> >> > KernelStack: 720 kB
> >> > PageTables: 1328 kB
> >> > NFS_Unstable: 0 kB
> >> > Bounce: 0 kB
> >> > WritebackTmp: 0 kB
> >> > CommitLimit: 58112 kB
> >> > Committed_AS: 142324 kB
> >> > VmallocTotal: 909312 kB
> >> > VmallocUsed: 0 kB
> >> > VmallocChunk: 0 kB
> >> >
> >> > This matters for a few reasons:
> >> > 1) my memory chip is too small to be practical and I need all the
> >> > bytes I can get.
> >> > 2) I need at least 32MiB to load a new firmware image.
> >> >
> >> > I dropped all the python except the mapper, and I dropped the newer
> >> > daemons from my build to clear out that difference. It was originally
> >> > about 16MiB difference, so I was thinking that something is now being
> >> > mapped by default that wasn't before, such as part of a flash image.
> >> >
> >> > Patrick
> >
> >
> >
> > --
> > Regards,
> > Kun
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 17:06 ` Kun Yi
2018-10-09 17:20 ` Patrick Venture
@ 2018-10-09 22:25 ` Joel Stanley
2018-10-10 15:31 ` Patrick Venture
2018-10-16 1:30 ` Brad Bishop
1 sibling, 2 replies; 11+ messages in thread
From: Joel Stanley @ 2018-10-09 22:25 UTC (permalink / raw)
To: Patrick Venture, Brad Bishop; +Cc: Ed Tanous, OpenBMC Maillist, Kun Yi
On Wed, 10 Oct 2018 at 03:38, Kun Yi <kunyi@google.com> wrote:
>
> A somewhat tedious way to test would be to build and boot with 'bitbake core-image-minimal' to ensure no phosphor-daemons are loaded, and then compare the kernel memory footprint.
>
> On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
>>
>> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
Hi team. Just a reminder about email etiquette on the mailing lists
that Brad posted a while back:
https://fedoraproject.org/wiki/Mailing_list_guidelines#Proper_posting_style
In particular the top posting bit, which makes it hard to reply to this thread.
Back to the issue at hand:
>> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
>> >
>> > Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
>> > on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
>> > stuff once things are settled, whereas before I could have up to
>> > 35MiB.
Are you able to boot the new kernel with the old userspace? This will
allow you to compare like for like (even if the system is not fully
functional in that state). Alternatively, boot it with a small
non-openbmc initrd to allow comparisons as Kun suggested.
The kernel has grown a bunch of new drivers. Most of them should not
probe, and therefore won't allocate memory at run time, but there may
be some new ones.
I've not spent much time looking at runtime memory usage, so if these
suggestions don't provide answers we might need to investigate a bit
deeper.
>> > This matters for a few reasons:
>> > 1) my memory chip is too small to be practical and I need all the
>> > bytes I can get.
>> > 2) I need at least 32MiB to load a new firmware image.
>> >
>> > I dropped all the python except the mapper, and I dropped the newer
>> > daemons from my build to clear out that difference. It was originally
>> > about 16MiB difference, so I was thinking that something is now being
>> > mapped by default that wasn't before, such as part of a flash image.
I initially thought you were confusing RAM with flash size, but on a
second read I now understand.
There has been recent work to create phosphor-tiny, is that relevant here Brad?
Cheers,
Joel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 22:25 ` Joel Stanley
@ 2018-10-10 15:31 ` Patrick Venture
2018-10-10 16:17 ` Patrick Venture
2018-10-16 1:30 ` Brad Bishop
1 sibling, 1 reply; 11+ messages in thread
From: Patrick Venture @ 2018-10-10 15:31 UTC (permalink / raw)
To: Joel Stanley; +Cc: Brad Bishop, Tanous, Ed, OpenBMC Maillist, Kun Yi
On Tue, Oct 9, 2018 at 3:25 PM Joel Stanley <joel@jms.id.au> wrote:
>
> On Wed, 10 Oct 2018 at 03:38, Kun Yi <kunyi@google.com> wrote:
> >
> > A somewhat tedious way to test would be to build and boot with 'bitbake core-image-minimal' to ensure no phosphor-daemons are loaded, and then compare the kernel memory footprint.
> >
> > On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
> >>
> >> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
>
> Hi team. Just a reminder about email etiquette on the mailing lists
> that Brad posted a while back:
>
> https://fedoraproject.org/wiki/Mailing_list_guidelines#Proper_posting_style
>
> In particular the top posting bit, which makes it hard to reply to this thread.
>
> Back to the issue at hand:
>
> >> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
> >> >
> >> > Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
> >> > on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
> >> > stuff once things are settled, whereas before I could have up to
> >> > 35MiB.
>
> Are you able to boot the new kernel with the old userspace? This will
> allow you to compare like for like (even if the system is not fully
> functional in that state). Alternatively, boot it with a small
> non-openbmc initrd to allow comparisons as Kun suggested.
I'm going to try building an older kernel with the newer userspace
first (it's a slightly easier experiment). I'll post back on this
thread once I have a little more information.
>
> The kernel has grown a bunch of new drivers. Most of them should not
> probe, and therefore won't allocate memory at run time, but there may
> be some new ones.
>
> I've not spent much time looking at runtime memory usage, so if these
> suggestions don't provide answers we might need to investigate a bit
> deeper.
>
> >> > This matters for a few reasons:
> >> > 1) my memory chip is too small to be practical and I need all the
> >> > bytes I can get.
> >> > 2) I need at least 32MiB to load a new firmware image.
> >> >
> >> > I dropped all the python except the mapper, and I dropped the newer
> >> > daemons from my build to clear out that difference. It was originally
> >> > about 16MiB difference, so I was thinking that something is now being
> >> > mapped by default that wasn't before, such as part of a flash image.
>
> I initially thought you were confusing RAM with flash size, but on a
> second read I now understand.
>
> There has been recent work to create phosphor-tiny, is that relevant here Brad?
>
> Cheers,
>
> Joel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-10 15:31 ` Patrick Venture
@ 2018-10-10 16:17 ` Patrick Venture
2018-10-11 14:25 ` Patrick Venture
0 siblings, 1 reply; 11+ messages in thread
From: Patrick Venture @ 2018-10-10 16:17 UTC (permalink / raw)
To: Joel Stanley; +Cc: Brad Bishop, Tanous, Ed, OpenBMC Maillist, Kun Yi
On Wed, Oct 10, 2018 at 8:31 AM Patrick Venture <venture@google.com> wrote:
>
> On Tue, Oct 9, 2018 at 3:25 PM Joel Stanley <joel@jms.id.au> wrote:
> >
> > On Wed, 10 Oct 2018 at 03:38, Kun Yi <kunyi@google.com> wrote:
> > >
> > > A somewhat tedious way to test would be to build and boot with 'bitbake core-image-minimal' to ensure no phosphor-daemons are loaded, and then compare the kernel memory footprint.
> > >
> > > On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
> > >>
> > >> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
> >
> > Hi team. Just a reminder about email etiquette on the mailing lists
> > that Brad posted a while back:
> >
> > https://fedoraproject.org/wiki/Mailing_list_guidelines#Proper_posting_style
> >
> > In particular the top posting bit, which makes it hard to reply to this thread.
> >
> > Back to the issue at hand:
> >
> > >> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
> > >> >
> > >> > Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
> > >> > on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
> > >> > stuff once things are settled, whereas before I could have up to
> > >> > 35MiB.
> >
> > Are you able to boot the new kernel with the old userspace? This will
> > allow you to compare like for like (even if the system is not fully
> > functional in that state). Alternatively, boot it with a small
> > non-openbmc initrd to allow comparisons as Kun suggested.
>
> I'm going to try building an older kernel with the newer userspace
> first (it's a slightly easier experiment). I'll post back on this
> thread once I have a little more information.
Running 4.17 kernel with new userspace:
MemTotal 113944
MemFree 19316
MemAvailable 61700
Buffers 11072
Cached 48872
Active 39012
Inactive 29200
Active(anon) 17432
Inactive(anon) 9332
Active(file) 21580
Inactive(file) 19868
Unevictable 9088
AnonPages 17372
Mapped 22704
Shmem 9408
Slab 11704
SReclaimable 6204
SUnreclaim 5500
KernelStack 648
PageTables 1472
CommitLimit 56972
Committed_AS 125448
VmallocTotal 1957888
Running 4.7.10 kernel with new userspace:
MemTotal: 116224 kB
MemFree: 27396 kB <--- older kernel same userspace, ~8MiB difference right off.
MemAvailable: 74212 kB
Buffers: 10380 kB
Cached: 46456 kB
Active: 40516 kB
Inactive: 33380 kB
Active(anon): 17124 kB
Inactive(anon): 9356 kB
Active(file): 23392 kB
Inactive(file): 24024 kB
Unevictable: 0 kB
AnonPages: 17076 kB
Mapped: 20696 kB
Shmem: 9420 kB
Slab: 9660 kB
SReclaimable: 4776 kB
SUnreclaim: 4884 kB
KernelStack: 744 kB
PageTables: 1364 kB
CommitLimit: 58112 kB
Committed_AS: 116540 kB
VmallocTotal: 909312 kB
Running 4.7.10 kernel with old (openbmc 1.99) userspace:
MemTotal: 116224 kB
MemFree: 39636 kB <--- old kernel and old userspace; ~12MiB more free than with the new userspace.
MemAvailable: 80224 kB
Buffers: 9500 kB
Cached: 38940 kB
SwapCached: 0 kB
Active: 36472 kB
Inactive: 26156 kB
Active(anon): 14452 kB
Inactive(anon): 6456 kB
Active(file): 22020 kB
Inactive(file): 19700 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 14200 kB
Mapped: 19584 kB
Shmem: 6720 kB
Slab: 8928 kB
SReclaimable: 4256 kB
SUnreclaim: 4672 kB
KernelStack: 744 kB
PageTables: 1320 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 58112 kB
Committed_AS: 139392 kB
VmallocTotal: 909312 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Losing those 8MiB with just a kernel difference makes me think it is
always loading more, or that some default 8MiB window is being eaten
now that wasn't before. I deliberately have the VGA window, but
that's in both kernels' device trees, so that isn't it -- but it makes
me think it's something along those lines.
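[Editor's note: one way to hunt for such a window is to diff /proc/iomem between the two kernels and look for a reserved region of roughly the missing size. A sketch of the parsing, on a hypothetical /proc/iomem excerpt — the `reserved` region below is illustrative, not from the thread:]

```python
import re

def iomem_sizes(text):
    """Return [(name, size_bytes)] for each region in /proc/iomem text."""
    rows = []
    for line in text.strip().splitlines():
        m = re.match(r"\s*([0-9a-f]+)-([0-9a-f]+)\s*:\s*(.+)", line)
        if m:
            start, end, name = m.groups()
            rows.append((name.strip(), int(end, 16) - int(start, 16) + 1))
    return rows

# Hypothetical excerpt: the System RAM range matches the new kernel's
# 0x80000000 lowmem base and 122880K total; the reserved child region
# is an invented example of an 8 MiB carve-out to look for.
SAMPLE = """\
80000000-877fffff : System RAM
  87000000-877fffff : reserved
"""

for name, size in iomem_sizes(SAMPLE):
    print(f"{name}: {size // 1024} KiB")
# System RAM: 122880 KiB
# reserved: 8192 KiB
```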
As for the rest of the loss in userspace :( :( I'll take a look at
what is built and compare sizes -- my process lists are basically
identical between old and new userspace, so something in them must
have changed?
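[Editor's note: a sketch of that "compare what is built" step — walk two extracted rootfs trees and report files that grew, appeared, or disappeared between the images. Directory paths are placeholders supplied by the caller; nothing here is from the thread.]

```python
import os

def file_sizes(root):
    """Map each regular file's path (relative to root) to its size in bytes."""
    out = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                out[os.path.relpath(path, root)] = os.path.getsize(path)
    return out

def compare_trees(old_root, new_root):
    """Return ({path: size delta}, {added path: size}, {removed path: size})."""
    old, new = file_sizes(old_root), file_sizes(new_root)
    changed = {p: new[p] - old[p] for p in old.keys() & new.keys() if old[p] != new[p]}
    added = {p: new[p] for p in new.keys() - old.keys()}
    removed = {p: old[p] for p in old.keys() - new.keys()}
    return changed, added, removed
```

Sorting `changed` by delta quickly surfaces which binaries account for the image growth.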
>
> >
> > The kernel has grown a bunch of new drivers. Most of them should not
> > probe, and therefore won't allocate memory at run time, but there may
> > be some new ones.
> >
> > I've not spent much time looking at runtime memory usage, so if these
> > suggestions don't provide answers we might need to investigate a bit
> > deeper.
> >
> > >> > This matters for a few reasons:
> > >> > 1) my memory chip is too small to be practical and I need all the
> > >> > bytes I can get.
> > >> > 2) I need at least 32MiB to load a new firmware image.
> > >> >
> > >> > I dropped all the python except the mapper, and I dropped the newer
> > >> > daemons from my build to clear out that difference. It was originally
> > >> > about 16MiB difference, so I was thinking that something is now being
> > >> > mapped by default that wasn't before, such as part of a flash image.
> >
> > I initially thought you were confusing RAM with flash size, but on a
> > second read I now understand.
> >
> > There has been recent work to create phosphor-tiny, is that relevant here Brad?
So, the image fits on my flash chip, but it may have extra daemons
that are being mapped in from flash for the fs?
> >
> > Cheers,
> >
> > Joel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-10 16:17 ` Patrick Venture
@ 2018-10-11 14:25 ` Patrick Venture
0 siblings, 0 replies; 11+ messages in thread
From: Patrick Venture @ 2018-10-11 14:25 UTC (permalink / raw)
To: Joel Stanley; +Cc: Brad Bishop, Tanous, Ed, OpenBMC Maillist, Kun Yi
On Wed, Oct 10, 2018 at 9:17 AM Patrick Venture <venture@google.com> wrote:
>
> On Wed, Oct 10, 2018 at 8:31 AM Patrick Venture <venture@google.com> wrote:
> >
> > On Tue, Oct 9, 2018 at 3:25 PM Joel Stanley <joel@jms.id.au> wrote:
> > >
> > > On Wed, 10 Oct 2018 at 03:38, Kun Yi <kunyi@google.com> wrote:
> > > >
> > > > A somewhat tedious way to test would be to build and boot with 'bitbake core-image-minimal' to ensure no phosphor-daemons are loaded, and then compare the kernel memory footprint.
> > > >
> > > > On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
> > > >>
> > > >> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
> > >
> > > Hi team. Just a reminder about email etiquette on the mailing lists
> > > that Brad posted a while back:
> > >
> > > https://fedoraproject.org/wiki/Mailing_list_guidelines#Proper_posting_style
> > >
> > > In particular the top posting bit, which makes it hard to reply to this thread.
> > >
> > > Back to the issue at hand:
> > >
> > > >> > On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
> > > >> >
> > > >> > Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
> > > >> > on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
> > > >> > stuff once things are settled, whereas before I could have up to
> > > >> > 35MiB.
> > >
> > > Are you able to boot the new kernel with the old userspace? This will
> > > allow you to compare like for like (even if the system is not fully
> > > functional in that state). Alternatively, boot it with a small
> > > non-openbmc initrd to allow comparisons as Kun suggested.
> >
> > I'm going to try building an older kernel with the newer userspace
> > first (it's a slightly easier experiment). I'll post back on this
> > thread once I have a little more information.
>
> Running 4.17 kernel with new userspace:
>
> MemTotal 113944
> MemFree 19316
> MemAvailable 61700
> Buffers 11072
> Cached 48872
> Active 39012
> Inactive 29200
> Active(anon) 17432
> Inactive(anon) 9332
> Active(file) 21580
> Inactive(file) 19868
> Unevictable 9088
> AnonPages 17372
> Mapped 22704
> Shmem 9408
> Slab 11704
> SReclaimable 6204
> SUnreclaim 5500
> KernelStack 648
> PageTables 1472
> CommitLimit 56972
> Committed_AS 125448
> VmallocTotal 1957888
>
> Running 4.7.10 kernel with new userspace:
>
> MemTotal: 116224 kB
> MemFree: 27396 kB <--- older kernel same userspace, ~8MiB difference right off.
> MemAvailable: 74212 kB
> Buffers: 10380 kB
> Cached: 46456 kB
> Active: 40516 kB
> Inactive: 33380 kB
> Active(anon): 17124 kB
> Inactive(anon): 9356 kB
> Active(file): 23392 kB
> Inactive(file): 24024 kB
> Unevictable: 0 kB
> AnonPages: 17076 kB
> Mapped: 20696 kB
> Shmem: 9420 kB
> Slab: 9660 kB
> SReclaimable: 4776 kB
> SUnreclaim: 4884 kB
> KernelStack: 744 kB
> PageTables: 1364 kB
> CommitLimit: 58112 kB
> Committed_AS: 116540 kB
> VmallocTotal: 909312 kB
>
> Running 4.7.10 kernel with old (openbmc 1.99) userspace:
>
> MemTotal: 116224 kB
> MemFree: 39636 kB <--- everything before ~12MiB difference.
> MemAvailable: 80224 kB
> Buffers: 9500 kB
> Cached: 38940 kB
> SwapCached: 0 kB
> Active: 36472 kB
> Inactive: 26156 kB
> Active(anon): 14452 kB
> Inactive(anon): 6456 kB
> Active(file): 22020 kB
> Inactive(file): 19700 kB
> Unevictable: 0 kB
> Mlocked: 0 kB
> SwapTotal: 0 kB
> SwapFree: 0 kB
> Dirty: 0 kB
> Writeback: 0 kB
> AnonPages: 14200 kB
> Mapped: 19584 kB
> Shmem: 6720 kB
> Slab: 8928 kB
> SReclaimable: 4256 kB
> SUnreclaim: 4672 kB
> KernelStack: 744 kB
> PageTables: 1320 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> WritebackTmp: 0 kB
> CommitLimit: 58112 kB
> Committed_AS: 139392 kB
> VmallocTotal: 909312 kB
> VmallocUsed: 0 kB
> VmallocChunk: 0 kB
>
> Losing those 8MiB with just a kernel difference, makes me think it is
> always loading more or there's some default 8MiB window being eaten
> now that wasn't before. I deliberately have the VGA window, but
> that's in both kernel's device-trees, so that isn't it -- but it makes
> me think it's something along those lines.
>
> As far as the rest of the loss in userspace :( :( I'll take a look at
> what is built and compare sizes -- my process lists are basically
> identical between old and new userspace, so something in them must
> have changed?
Running with 2.3+ openbmc:
~# ps -eaf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Oct08 ? 00:00:31 /sbin/init
root 2 0 0 Oct08 ? 00:00:00 [kthreadd]
root 4 2 0 Oct08 ? 00:00:00 [kworker/0:0H]
root 5 2 0 Oct08 ? 00:00:00 [kworker/u2:0]
root 6 2 0 Oct08 ? 00:00:00 [mm_percpu_wq]
root 7 2 0 Oct08 ? 00:01:16 [ksoftirqd/0]
root 8 2 0 Oct08 ? 00:00:01 [watchdog/0]
root 9 2 0 Oct08 ? 00:00:00 [kdevtmpfs]
root 10 2 0 Oct08 ? 00:00:00 [oom_reaper]
root 11 2 0 Oct08 ? 00:00:00 [kworker/u2:1]
root 151 2 0 Oct08 ? 00:00:00 [writeback]
root 153 2 0 Oct08 ? 00:00:00 [crypto]
root 155 2 0 Oct08 ? 00:00:00 [kblockd]
root 156 2 0 Oct08 ? 00:00:00 [watchdogd]
root 176 2 0 Oct08 ? 00:00:22 [kworker/0:1]
root 197 2 0 Oct08 ? 00:00:00 [kswapd0]
root 245 2 0 Oct08 ? 00:00:00 [hwrng]
root 467 2 0 Oct08 ? 00:00:00 [ipv6_addrconf]
root 497 2 0 Oct08 ? 00:00:00 [kworker/0:1H]
root 525 2 0 Oct08 ? 00:00:00 [jffs2_gcd_mtd5]
root 551 1 0 Oct08 ? 00:02:46 /lib/systemd/systemd-journald
root 647 1 0 Oct08 ? 00:00:01 /lib/systemd/systemd-udevd
systemd+ 866 1 0 Oct08 ? 00:00:28 /lib/systemd/systemd-timesyncd
systemd+ 905 1 0 Oct08 ? 00:00:01 /lib/systemd/systemd-resolved
root 909 1 0 Oct08 ? 00:00:00 phosphor-log-manager
root 910 1 0 Oct08 ? 00:00:01 phosphor-settings-manager
message+ 911 1 1 Oct08 ? 00:16:18 /usr/bin/dbus-daemon
--system --address=systemd: --nofork --nopidfile --systemd-activation
--syslog-only
root 912 1 0 Oct08 ? 00:00:46 python
/usr/sbin/phosphor-mapper --path_namespaces=/org/openbmc
/xyz/openbmc_project --interface_namespaces=xyz.openbmc_project
org.openbmc --blacklists= --
root 913 1 0 Oct08 ? 00:00:10 btbridged
root 914 1 0 Oct08 ? 00:00:00 /sbin/klogd -n
root 915 1 0 Oct08 ? 00:00:03 phosphor-inventory
avahi 916 1 0 Oct08 ? 00:00:08 avahi-daemon: running
[iceblink.local]
root 917 1 0 Oct08 ? 00:00:00 /usr/sbin/snoopd -d
/dev/aspeed-lpc-snoop0 -b 1
root 918 1 0 Oct08 ? 00:00:06 /usr/sbin/rngd -f
root 919 1 0 Oct08 ? 00:00:00 /sbin/syslogd -n
root 921 1 0 Oct08 ? 00:01:50 /bin/bash
/usr/sbin/set_gateway_arp.sh eth1
root 924 1 0 Oct08 ? 00:00:00 phosphor-watchdog
--service=xyz.openbmc_project.Watchdog
--path=/xyz/openbmc_project/watchdog/host0
--target=iceblink-reset.target --continue
root 926 1 0 Oct08 ttyS5 00:00:00 obmc-console-server
--config /etc/obmc-console.conf ttyVUART0
root 931 2 0 Oct08 ? 00:00:00 [kworker/0:2]
avahi 936 916 0 Oct08 ? 00:00:00 avahi-daemon: chroot helper
root 938 1 0 Oct08 ? 00:00:01 ipmid
root 939 1 0 Oct08 ? 00:00:00 slpd
root 941 1 0 Oct08 ttyS4 00:00:00 /bin/login --
root 945 1 0 Oct08 ? 00:00:14 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@2/psu@58
root 947 1 0 Oct08 ? 00:00:12 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@1/psu@58
root 949 1 0 Oct08 ? 00:00:00 phosphor-hwmon-readd
-o /iio-hwmon-battery
root 950 1 0 Oct08 ? 00:01:05 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4c
root 954 1 0 Oct08 ? 00:00:00 phosphor-dbus-monitor
root 960 1 0 Oct08 ? 00:00:00 phosphor-network-snmpconf
root 965 1 0 Oct08 ? 00:00:03 phosphor-network-manager
root 969 1 0 Oct08 ? 00:00:18 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@3/psu@59
root 970 1 0 Oct08 ? 00:01:03 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4e
root 971 1 0 Oct08 ? 00:00:21 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@0/psu@59
root 972 1 0 Oct08 ? 00:07:59 phosphor-hwmon-readd
-o /ahb/apb/pwm-tacho-controller@1e786000
root 973 1 0 Oct08 ? 00:01:05 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4f
root 975 1 1 Oct08 ? 00:11:36 phosphor-hwmon-readd
-o /iio-hwmon
systemd+ 992 1 0 Oct08 ? 00:00:00 /lib/systemd/systemd-networkd
root 1006 1 0 Oct08 ? 00:02:00 /usr/sbin/ncsid eth1
root 1009 1 2 Oct08 ? 00:28:22 /usr/sbin/swampd
root 1012 1 0 Oct08 ? 00:00:04 /usr/sbin/watchdog
root 12836 1 0 01:46 ? 00:00:09 /usr/sbin/nemorad eth1
root 19051 2 0 07:33 ? 00:00:00 [kworker/u2:2]
root 19054 921 0 07:33 ? 00:00:00 sleep 10
root 19055 941 3 07:33 ttyS4 00:00:00 -sh
root 19059 19055 0 07:33 ttyS4 00:00:00 ps -eaf
The phosphor-hwmon-readd instances each take ~4% of memory:
top - 07:36:25 up 16:52, 1 user, load average: 0.15, 0.14, 0.10
Tasks: 65 total, 1 running, 52 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.9 us, 2.7 sy, 0.0 ni, 94.3 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 113960 total, 21252 free, 27920 used, 64788 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 62284 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
912 root 20 0 14664 10356 5444 S 0.0 9.1 0:46.25
python <-- phosphor-mapper python
1 root 20 0 25568 6192 4492 S 0.0 5.4 0:31.74 systemd
12836 root 20 0 42860 5668 5304 S 0.0 5.0 0:09.80 nemorad
938 root 20 0 8236 5204 4644 S 0.0 4.6 0:01.12 ipmid
915 root 20 0 6892 4752 4496 S 0.0 4.2 0:03.57
phosphor-invent
965 root 20 0 6512 4700 4452 S 0.0 4.1 0:03.61
phosphor-networ
1009 root 20 0 41272 4676 4328 S 0.0 4.1 28:26.95 swampd
969 root 20 0 6576 4620 4336 S 0.0 4.1 0:18.10
phosphor-hwmon-
947 root 20 0 6576 4616 4336 S 0.0 4.1 0:12.03
phosphor-hwmon-
971 root 20 0 6576 4608 4336 S 0.0 4.0 0:21.64
phosphor-hwmon-
972 root 20 0 6576 4604 4336 S 0.0 4.0 8:00.20
phosphor-hwmon-
975 root 20 0 6680 4560 4272 S 0.0 4.0 11:38.04
phosphor-hwmon-
945 root 20 0 6576 4544 4272 S 0.0 4.0 0:14.57
phosphor-hwmon-
950 root 20 0 6576 4540 4272 S 0.0 4.0 1:05.53
phosphor-hwmon-
973 root 20 0 6576 4540 4272 S 0.0 4.0 1:06.05
phosphor-hwmon-
949 root 20 0 6576 4528 4272 S 0.0 4.0 0:00.24
phosphor-hwmon-
970 root 20 0 6576 4528 4272 S 0.0 4.0 1:04.10
phosphor-hwmon-
954 root 20 0 8204 4440 4176 S 0.0 3.9 0:00.67
phosphor-dbus-m
551 root 20 0 22400 4336 3932 S 0.0 3.8 2:46.93
systemd-journal
910 root 20 0 6480 4248 4004 S 0.0 3.7 0:01.35
phosphor-settin
924 root 20 0 6208 4128 3920 S 0.0 3.6 0:00.15
phosphor-watchd
909 root 20 0 6336 4088 3860 S 0.0 3.6 0:00.76
phosphor-log-ma
917 root 20 0 6268 3932 3720 S 0.0 3.5 0:00.18 snoopd
960 root 20 0 6256 3856 3648 S 0.0 3.4 0:00.23
phosphor-networ
992 systemd+ 20 0 5472 3800 3472 S 0.0 3.3 0:00.71
systemd-network
905 systemd+ 20 0 5940 3660 3384 S 0.0 3.2 0:01.35
systemd-resolve
866 systemd+ 20 0 24508 3388 3068 S 0.0 3.0 0:28.22
systemd-timesyn
1006 root 20 0 5316 3300 3128 S 0.0 2.9 2:01.05 ncsid
911 message+ 20 0 4404 3072 2456 S 0.0 2.7 16:21.02 dbus-daemon
939 root 20 0 5212 2852 2716 S 0.0 2.5 0:00.15 slpd
916 avahi 20 0 4492 2760 2532 S 0.0 2.4 0:08.32 avahi-daemon
647 root 20 0 3836 2604 2032 S 0.0 2.3 0:01.86 systemd-udevd
19055 root 20 0 3104 2528 2300 S 0.0 2.2 0:00.26 sh
921 root 20 0 3004 2304 2132 S 0.0 2.0 1:51.07
set_gateway_arp
913 root 20 0 2916 1912 1800 S 0.0 1.7 0:10.42 btbridged
941 root 20 0 3520 1900 1640 S 0.0 1.7 0:00.59 login
19111 root 20 0 2804 1796 1588 R 36.0 1.6 0:00.20 top
914 root 20 0 3272 1668 1588 S 0.0 1.5 0:00.07 klogd
919 root 20 0 3272 1644 1588 S 0.0 1.4 0:00.12 syslogd
936 avahi 20 0 4368 1392 1252 S 0.0 1.2 0:00.01 avahi-daemon
19110 root 20 0 3140 1252 1204 S 0.0 1.1 0:00.01 sleep
926 root 20 0 1896 1124 1052 S 0.0 1.0 0:00.10
obmc-console-se
918 root 20 0 1756 1088 1044 S 0.0 1.0 0:06.81 rngd
1012 root 20 0 1836 1052 988 S 0.0 0.9 0:04.59 watchdog
Running with 1.99.10 openbmc:
ps -eaf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 May24 ? 00:01:07 /sbin/init
root 2 0 0 May24 ? 00:00:00 [kthreadd]
root 3 2 0 May24 ? 00:01:55 [ksoftirqd/0]
root 5 2 0 May24 ? 00:00:00 [kworker/0:0H]
root 7 2 0 May24 ? 00:00:00 [lru-add-drain]
root 8 2 0 May24 ? 00:00:00 [kdevtmpfs]
root 9 2 0 May24 ? 00:00:00 [kworker/u2:1]
root 10 2 0 May24 ? 00:00:00 [oom_reaper]
root 133 2 0 May24 ? 00:00:00 [writeback]
root 134 2 0 May24 ? 00:00:00 [kcompactd0]
root 136 2 0 May24 ? 00:00:00 [bioset]
root 138 2 0 May24 ? 00:00:00 [kblockd]
root 143 2 0 May24 ? 00:00:00 [watchdogd]
root 166 2 0 May24 ? 00:00:00 [rpciod]
root 175 2 0 May24 ? 00:00:00 [kswapd0]
root 176 2 0 May24 ? 00:00:00 [nfsiod]
root 210 2 0 May24 ? 00:00:00 [bioset]
root 211 2 0 May24 ? 00:00:00 [bioset]
root 212 2 0 May24 ? 00:00:00 [bioset]
root 213 2 0 May24 ? 00:00:00 [bioset]
root 223 2 0 May24 ? 00:00:00 [bioset]
root 226 2 0 May24 ? 00:00:00 [bioset]
root 229 2 0 May24 ? 00:00:00 [bioset]
root 232 2 0 May24 ? 00:00:00 [bioset]
root 235 2 0 May24 ? 00:00:00 [bioset]
root 238 2 0 May24 ? 00:00:00 [bioset]
root 241 2 0 May24 ? 00:00:00 [bioset]
root 244 2 0 May24 ? 00:00:00 [bioset]
root 256 2 0 May24 ? 00:00:00 [bioset]
root 261 2 0 May24 ? 00:00:00 [bioset]
root 266 2 0 May24 ? 00:00:00 [bioset]
root 271 2 0 May24 ? 00:00:00 [bioset]
root 276 2 0 May24 ? 00:00:00 [bioset]
root 281 2 0 May24 ? 00:00:00 [bioset]
root 447 2 0 May24 ? 00:00:00 [ipv6_addrconf]
root 452 2 0 May24 ? 00:00:00 [deferwq]
root 500 2 0 May24 ? 00:00:00 [jffs2_gcd_mtd5]
root 505 2 0 May24 ? 00:00:00 [kworker/0:1H]
root 520 1 3 May24 ? 00:39:30 /lib/systemd/systemd-journald
root 612 1 0 May24 ? 00:00:01 /lib/systemd/systemd-udevd
systemd+ 835 1 0 May24 ? 00:03:00 /lib/systemd/systemd-timesyncd
root 923 1 0 May24 ttyS0 00:00:00 obmc-console-server
--config /etc/obmc-console.conf ttyVUART0
root 924 1 0 May24 ? 00:00:00 phosphor-settings-manager
root 925 1 0 May24 ? 00:00:02 phosphor-inventory
root 927 1 0 May24 ? 00:00:00 /usr/sbin/snoopd
root 928 1 0 May24 ? 00:00:00 phosphor-log-manager
root 929 1 0 May24 ? 00:06:36 btbridged
root 930 1 0 May24 ? 00:02:48 phosphor-watchdog
--service=xyz.openbmc_project.Watchdog
--path=/xyz/openbmc_project/watchdog/host0
--target=iceblink-reset.target --continue
root 931 1 36 May24 ? 07:39:09 python
/usr/sbin/phosphor-mapper --path_namespaces=/org/openbmc
/xyz/openbmc_project --interface_namespaces=org.openbmc
xyz.openbmc_project --blacklists= --
message+ 937 1 6 May24 ? 01:22:47 /usr/bin/dbus-daemon
--system --address=systemd: --nofork --nopidfile --systemd-activation
root 960 1 0 May24 ? 00:01:39 /sbin/syslogd -n
root 961 1 0 May24 ? 00:00:00 /sbin/klogd -n
avahi 962 1 0 May24 ? 00:00:05 avahi-daemon: running
[iceblink.local]
root 963 1 0 May24 ? 00:02:01 /bin/bash
/usr/sbin/set_gateway_arp.sh eth1
avahi 974 962 0 May24 ? 00:00:00 avahi-daemon: chroot helper
root 976 1 0 May24 ? 00:00:00 slpd
root 977 1 1 May24 ? 00:23:49 ipmid
root 979 1 0 May24 ttyS4 00:00:00 /bin/login --
systemd+ 985 1 0 May24 ? 00:00:00 /lib/systemd/systemd-networkd
root 990 979 0 May24 ttyS4 00:00:00 -sh
root 995 1 0 May24 ? 00:00:40 phosphor-hwmon-readd
-o /iio-hwmon
root 996 1 7 May24 ? 01:31:52 phosphor-hwmon-readd
-o /pwm-tacho-controller@1e786000
root 997 1 0 May24 ? 00:03:21 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@0/psu@59
root 998 1 0 May24 ? 00:03:18 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@2/psu@58
root 999 1 0 May24 ? 00:00:36 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4c
root 1000 1 0 May24 ? 00:00:36 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4e
root 1001 1 0 May24 ? 00:03:23 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@3/psu@59
root 1002 1 0 May24 ? 00:00:37 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@80/tmp75@4f
root 1003 1 0 May24 ? 00:03:31 phosphor-hwmon-readd
-o /ahb/apb/i2c@1e78a000/i2c-bus@300/i2c-switch@70/i2c@1/psu@58
root 1006 1 0 May24 ? 00:00:03 phosphor-network-manager
root 1013 1 0 May24 ? 00:00:00 phosphor-dbus-monitor
root 1017 1 0 May24 ? 00:00:00 phosphor-hwmon-readd
-o /iio-hwmon-battery
root 1073 1 0 May24 ? 00:00:34 /usr/sbin/ncsid eth1
root 1088 1 1 May24 ? 00:18:51 /usr/sbin/swampd
root 1090 1 0 May24 ? 00:00:02 /usr/sbin/watchdog
root 14996 2 0 07:21 ? 00:00:00 [kworker/u2:0]
root 25090 2 0 16:00 ? 00:00:00 [kworker/0:0]
root 25220 2 0 16:07 ? 00:00:00 [kworker/0:1]
root 25520 2 0 16:21 ? 00:00:00 [kworker/0:2]
root 25625 2 0 16:26 ? 00:00:00 [kworker/0:3]
root 25655 1 1 16:27 ? 00:00:00 /usr/sbin/nemorad eth1
root 25660 963 0 16:27 ? 00:00:00 sleep 10
root 25661 990 0 16:27 ttyS4 00:00:00 ps -eaf
top - 16:27:46 up 20:51, 1 user, load average: 0.74, 0.98, 1.02
Tasks: 83 total, 4 running, 79 sleeping, 0 stopped, 0 zombie
%Cpu(s): 38.3 us, 10.5 sy, 0.0 ni, 51.1 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 116224 total, 36440 free, 25856 used, 53928 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 77640 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
931 root 20 0 16252 10664 5824 R 39.2 9.2 459:16.10 python
977 root 20 0 9872 7364 4932 S 0.0 6.3 23:49.80 ipmid
1 root 20 0 25124 5332 4420 S 0.0 4.6 1:07.47 systemd
25655 root 20 0 26104 4964 4656 S 0.0 4.3 0:00.12 nemorad
1006 root 20 0 6264 4560 4328 S 0.0 3.9 0:03.63
phosphor-networ
925 root 20 0 6596 4520 4224 S 0.0 3.9 0:02.76
phosphor-invent
1017 root 20 0 6228 4468 4228 S 0.0 3.8 0:00.72
phosphor-hwmon-
995 root 20 0 6228 4416 4164 S 0.0 3.8 0:40.10
phosphor-hwmon-
996 root 20 0 6228 4404 4164 S 0.0 3.8 91:53.64
phosphor-hwmon-
997 root 20 0 6228 4404 4164 S 0.0 3.8 3:21.94
phosphor-hwmon-
998 root 20 0 6228 4404 4164 S 0.0 3.8 3:18.27
phosphor-hwmon-
1001 root 20 0 6228 4404 4164 S 0.0 3.8 3:23.29
phosphor-hwmon-
1003 root 20 0 6228 4404 4164 S 0.0 3.8 3:31.54
phosphor-hwmon-
1000 root 20 0 6228 4400 4164 S 0.0 3.8 0:36.22
phosphor-hwmon-
1002 root 20 0 6228 4396 4164 S 0.0 3.8 0:37.46
phosphor-hwmon-
999 root 20 0 6228 4392 4164 S 0.0 3.8 0:36.41
phosphor-hwmon-
1088 root 20 0 41008 4368 4064 S 0.0 3.8 18:52.12 swampd
520 root 20 0 22776 4320 3964 S 0.0 3.7 39:30.70
systemd-journal
928 root 20 0 6048 4016 3800 S 0.0 3.5 0:00.14
phosphor-log-ma
985 systemd+ 20 0 5748 3880 3600 S 0.0 3.3 0:00.25
systemd-network
924 root 20 0 6208 3724 3516 S 0.0 3.2 0:00.40
phosphor-settin
835 systemd+ 20 0 14976 3512 3252 S 0.0 3.0 3:00.64
systemd-timesyn
930 root 20 0 5948 3456 3260 S 0.0 3.0 2:48.19
phosphor-watchd
927 root 20 0 15052 3296 3092 S 0.0 2.8 0:00.31 snoopd
1073 root 20 0 5244 3132 2980 S 0.0 2.7 0:34.43 ncsid
937 message+ 20 0 5072 3124 2584 R 0.0 2.7 82:48.94 dbus-daemon
962 avahi 20 0 5304 2880 2656 S 0.0 2.5 0:05.99 avahi-daemon
976 root 20 0 5160 2756 2620 S 0.0 2.4 0:00.10 slpd
979 root 20 0 5580 2716 2432 S 0.0 2.3 0:00.57 login
1013 root 20 0 5152 2592 2444 S 0.0 2.2 0:00.00
phosphor-dbus-m
990 root 20 0 3072 2520 2320 S 0.0 2.2 0:00.06 sh
612 root 20 0 3724 2444 2116 S 0.0 2.1 0:01.96 systemd-udevd
963 root 20 0 3072 2304 2144 S 0.0 2.0 2:01.88
set_gateway_arp
929 root 20 0 3816 2232 2120 S 0.0 1.9 6:36.91 btbridged
25668 root 20 0 3068 1876 1648 R 28.4 1.6 0:00.46 top
960 root 20 0 3040 1604 1556 S 0.0 1.4 1:39.19 syslogd
961 root 20 0 3040 1572 1508 S 0.0 1.4 0:00.13 klogd
974 avahi 20 0 5180 1384 1224 S 0.0 1.2 0:00.00 avahi-daemon
25667 root 20 0 2908 1216 1172 S 0.0 1.0 0:00.03 sleep
923 root 20 0 1876 1184 1116 S 0.0 1.0 0:00.56
obmc-console-se
1090 root 20 0 1816 1048 984 S 0.0 0.9 0:02.48 watchdog
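As an aside, raw top dumps like the two above are awkward to diff by eye; a small script can snapshot just the /proc/meminfo fields under discussion so two boots can be compared directly. This is only a sketch — the output filename and the chosen field list are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: capture only the /proc/meminfo fields this thread is comparing,
# so two boots (e.g. the 4.7 and 4.18 images) can be diffed directly.
# The snapshot filename is an assumption, not anything from the thread.
OUT=meminfo-snapshot.txt

# Total, free, available, and the buff/cache-related fields from the
# "KiB Mem" lines in the top output above.
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|Slab):' /proc/meminfo > "$OUT"
cat "$OUT"
```

Running this once on each image and diffing the two files makes the ~15 MiB discrepancy show up as a concrete per-field delta rather than a number eyeballed from top.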
>
> >
> > >
> > > The kernel has grown a bunch of new drivers. Most of them should not
> > > probe, and therefore won't allocate memory at run time, but there may
> > > be some new ones.
> > >
> > > I've not spent much time looking at runtime memory usage, so if these
> > > suggestions don't provide answers we might need to investigate a bit
> > > deeper.
> > >
> > > >> > This matters for a few reasons:
> > > >> > 1) my memory chip is too small to be practical and I need all the
> > > >> > bytes I can get.
> > > >> > 2) I need at least 32MiB to load a new firmware image.
> > > >> >
> > > >> > I dropped all the python except the mapper, and I dropped the newer
> > > >> > daemons from my build to clear out that difference. It was originally
> > > >> > about 16MiB difference, so I was thinking that something is now being
> > > >> > mapped by default that wasn't before, such as part of a flash image.
> > >
> > > I initially thought you were confusing RAM with flash size, but on a
> > > second read I now understand.
> > >
> > > There has been recent work to create phosphor-tiny, is that relevant here Brad?
>
> So, the image fits on my flash chip, but it may have extra daemons
> that are being mapped in from flash for the fs?
>
> > >
> > > Cheers,
> > >
> > > Joel
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Loss of several MB of run-time memory
2018-10-09 22:25 ` Joel Stanley
2018-10-10 15:31 ` Patrick Venture
@ 2018-10-16 1:30 ` Brad Bishop
1 sibling, 0 replies; 11+ messages in thread
From: Brad Bishop @ 2018-10-16 1:30 UTC (permalink / raw)
To: Joel Stanley; +Cc: Patrick Venture, Ed Tanous, OpenBMC Maillist, Kun Yi
> On Oct 9, 2018, at 6:25 PM, Joel Stanley <joel@jms.id.au> wrote:
>
> On Wed, 10 Oct 2018 at 03:38, Kun Yi <kunyi@google.com> wrote:
>>
>> A somewhat tedious way to test would be to build and boot with 'bitbake core-image-minimal' to ensure no phosphor-daemons are loaded, and then compare the kernel memory footprint.
>>
>> On Tue, Oct 9, 2018 at 10:01 AM Tanous, Ed <ed.tanous@intel.com> wrote:
>>>
>>> Was this only the kernel version jump, or did you jump in openbmc/phosphor levels as well? There have been quite a few daemons added in the last 6 months or so that could explain your memory footprint increase.
>
> Hi team. Just a reminder about email etiquette on the mailing lists
> that Brad posted a while back:
>
> https://fedoraproject.org/wiki/Mailing_list_guidelines#Proper_posting_style
>
> In particular the top posting bit, which makes it hard to reply to this thread.
>
> Back to the issue at hand:
>
>>>> On Oct 9, 2018, at 9:54 AM, Patrick Venture <venture@google.com> wrote:
>>>>
>>>> Just jumped from 4.7 kernel to 4.18 running the latest openbmc image
>>>> on the quanta-q71l board. And I see now I have ~20MiB of RAM free for
>>>> stuff once things are settled, whereas before I could have up to
>>>> 35MiB.
>
> Are you able to boot the new kernel with the old userspace? This will
> allow you to compare like for like (even if the system is not fully
> functional in that state). Alternatively, boot it with a small
> non-openbmc initrd to allow comparisons as Kun suggested.
>
> The kernel has grown a bunch of new drivers. Most of them should not
> probe, and therefore won't allocate memory at run time, but there may
> be some new ones.
>
> I've not spent much time looking at runtime memory usage, so if these
> suggestions don't provide answers we might need to investigate a bit
> deeper.
>
>>>> This matters for a few reasons:
>>>> 1) my memory chip is too small to be practical and I need all the
>>>> bytes I can get.
>>>> 2) I need at least 32MiB to load a new firmware image.
>>>>
>>>> I dropped all the python except the mapper, and I dropped the newer
>>>> daemons from my build to clear out that difference. It was originally
>>>> about 16MiB difference, so I was thinking that something is now being
>>>> mapped by default that wasn't before, such as part of a flash image.
>
> I initially thought you were confusing RAM with flash size, but on a
> second read I now understand.
>
> There has been recent work to create phosphor-tiny, is that relevant here Brad?
No, at least not yet. At the moment all phosphor-tiny does is remove the Python
source (leaving just the bytecode), so it reduces the flash footprint only.
>
> Cheers,
>
> Joel
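The like-for-like comparison Joel and Kun suggest ultimately reduces to capturing /proc/meminfo on each image and diffing the values. A minimal sketch of that diff step — the snapshot filenames are assumptions, and the two files written here contain illustrative values roughly matching the ~15 MiB loss reported, not real data from the thread:

```shell
#!/bin/sh
# Sketch: compute per-field deltas between two saved /proc/meminfo
# snapshots, e.g. one taken on the old 4.7 image and one on the new
# 4.18 image. Filenames and the values below are illustrative only.
old=old-meminfo.txt
new=new-meminfo.txt

# Fabricated example snapshots (stand-ins for real captures).
printf 'MemTotal: 122880 kB\nMemFree: 35840 kB\n' > "$old"
printf 'MemTotal: 122880 kB\nMemFree: 20480 kB\n' > "$new"

# First pass (NR==FNR) loads the old values keyed by field name;
# second pass prints new-minus-old for each shared field.
awk 'NR==FNR { a[$1]=$2; next }
     $1 in a { printf "%s delta %d kB\n", $1, $2 - a[$1] }' "$old" "$new"
```

With real snapshots in place of the fabricated ones, a large negative MemFree delta with an unchanged MemTotal points at userspace growth, while a shrunken MemTotal would instead implicate kernel reservations.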
^ permalink raw reply [flat|nested] 11+ messages in thread
end of thread, other threads:[~2018-10-16 1:30 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-09 16:53 Loss of several MB of run-time memory Patrick Venture
2018-10-09 16:57 ` Tanous, Ed
2018-10-09 17:06 ` Kun Yi
2018-10-09 17:20 ` Patrick Venture
2018-10-09 18:55 ` Patrick Venture
2018-10-09 22:25 ` Joel Stanley
2018-10-10 15:31 ` Patrick Venture
2018-10-10 16:17 ` Patrick Venture
2018-10-11 14:25 ` Patrick Venture
2018-10-16 1:30 ` Brad Bishop
2018-10-09 17:06 ` Patrick Venture