From: "Rosen, Rami" <rami.rosen@intel.com>
To: Monika Mails <mails.monika@gmail.com>,
	Stephen Hemminger <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: Testpmd fails in initialization
Date: Sat, 2 Jun 2018 03:16:12 +0000	[thread overview]
Message-ID: <9B0331B6EBBD0E4684FBFAEDA55776F95A3C4D14@HASMSX110.ger.corp.intel.com> (raw)
In-Reply-To: <CALPXby94L36YiAwH2oTyODh-wjqq+LyzgAyCjkhqJo5XDWceZA@mail.gmail.com>

Hi Monika,

> For the last command I just declared 10 hugepages.

This is not enough.

I would suggest that you try with 256 pages per socket: with 2MB hugepages, 256 pages come to the 512MB per socket that your --socket-mem=512,512 option requests.
I assume you have two sockets on the host (this can be verified by running "lscpu") and that you are using 2MB hugepages.
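
As a quick sanity check (just a sketch, the exact output format depends on your distribution), the socket count and the default hugepage size can be seen with:

lscpu | grep -i "socket(s)"
grep Hugepagesize /proc/meminfo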

So, this can be done by:
echo 256 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
and 
echo 256 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
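
These echo settings do not survive a reboot. If you want the reservation to happen automatically, one option (just a sketch, and the file name below is only an example) is to set vm.nr_hugepages via sysctl:

echo "vm.nr_hugepages=512" | sudo tee /etc/sysctl.d/hugepages.conf
sudo sysctl -p /etc/sysctl.d/hugepages.conf

Note that vm.nr_hugepages only sets the total count; the kernel spreads the pages over the NUMA nodes, so re-check the per-node files above after applying it.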

And run the same command that you used to launch testpmd.

If testpmd still fails, please post the output of:
cat /proc/meminfo | grep -i huge
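
On a host where the reservation worked, I would expect that to show something roughly like the lines below (these are the standard /proc/meminfo hugepage fields; the numbers are only illustrative and should match what you reserved):

HugePages_Total:     512
HugePages_Free:      512
Hugepagesize:       2048 kB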

Also, for the future, this type of query should be posted to the dpdk-users mailing list.

Regards,
Rami Rosen


-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Monika Mails
Sent: Saturday, June 02, 2018 01:18
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Testpmd fails in initialization

Hi Stephen,

For the last command I just declared 10 hugepages.

Regards,
Monika

On Fri, Jun 1, 2018 at 3:11 PM, Stephen Hemminger <stephen@networkplumber.org> wrote:

> How much huge page memory did you confirm?
> $ cat /proc/meminfo
>
> On Fri, Jun 1, 2018, 3:06 PM Monika Mails <mails.monika@gmail.com> wrote:
>
>> Hi All,
>>
>> This is Monika.
>> I am new to DPDK.
>> I just installed DPDK release 18.05 on my server and am trying to run
>> "testpmd".
>>
>> Here is my command and its output :-
>>
>>  sudo testpmd -l 8 -n 4 -c 0x3 --socket-mem=512,512
>>
>> EAL: Detected lcore 0 as core 3 on socket 0
>> EAL: Detected lcore 1 as core 3 on socket 1
>> EAL: Detected lcore 2 as core 4 on socket 0
>> EAL: Detected lcore 3 as core 4 on socket 1
>> EAL: Detected lcore 4 as core 5 on socket 0
>> EAL: Detected lcore 5 as core 5 on socket 1
>> EAL: Detected lcore 6 as core 6 on socket 0
>> EAL: Detected lcore 7 as core 6 on socket 1
>> EAL: Detected lcore 8 as core 10 on socket 0
>> EAL: Detected lcore 9 as core 10 on socket 1
>> EAL: Detected lcore 10 as core 11 on socket 0
>> EAL: Detected lcore 11 as core 11 on socket 1
>> EAL: Detected lcore 12 as core 3 on socket 0
>> EAL: Detected lcore 13 as core 3 on socket 1
>> EAL: Detected lcore 14 as core 4 on socket 0
>> EAL: Detected lcore 15 as core 4 on socket 1
>> EAL: Detected lcore 16 as core 5 on socket 0
>> EAL: Detected lcore 17 as core 5 on socket 1
>> EAL: Detected lcore 18 as core 6 on socket 0
>> EAL: Detected lcore 19 as core 6 on socket 1
>> EAL: Detected lcore 20 as core 10 on socket 0
>> EAL: Detected lcore 21 as core 10 on socket 1
>> EAL: Detected lcore 22 as core 11 on socket 0
>> EAL: Detected lcore 23 as core 11 on socket 1
>> EAL: Support maximum 128 logical core(s) by configuration.
>> EAL: Detected 24 lcore(s)
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: Setting up physically contiguous memory...
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7f2d14600000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7f2d14200000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7f2d13e00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7f2d13a00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x400000 bytes
>> EAL: Virtual area found at 0x7f2d13400000 (size = 0x400000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7f2d13000000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x200000 bytes
>> EAL: Virtual area found at 0x7f2d12c00000 (size = 0x200000)
>> EAL: Ask a virtual area of 0x400000 bytes
>> EAL: Virtual area found at 0x7f2d12600000 (size = 0x400000)
>> *EAL: Not enough memory available on socket 0! Requested: 512MB, available: 0MB*
>> PANIC in rte_eal_init():
>> Cannot init memory
>> 6: [testpmd(_start+0x29) [0x4155b9]]
>> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)
>> [0x7f2d158b5830]]
>> 4: [testpmd(main+0x51) [0x414fb1]]
>> 3: [/usr/lib/x86_64-linux-gnu/libdpdk.so.0(rte_eal_init+0x1111)
>> [0x7f2d16e59c91]]
>> 2: [/usr/lib/x86_64-linux-gnu/libdpdk.so.0(__rte_panic+0xd0)
>> [0x7f2d16e38e18]]
>> 1: [/usr/lib/x86_64-linux-gnu/libdpdk.so.0(rte_dump_stack+0x2b)
>> [0x7f2d16fa840b]]
>> Aborted (core dumped)
>>
>> Here are the other details of my server :-
>>
>> *./deps/dpdk-18.05/usertools/cpu_layout.py*
>> ======================================================================
>> Core and Socket Information (as reported by '/sys/devices/system/cpu')
>> ======================================================================
>>
>> cores =  [3, 4, 5, 6, 10, 11]
>> sockets =  [0, 1]
>>
>>         Socket 0        Socket 1
>>         --------        --------
>> Core 3  [0, 12]         [1, 13]
>> Core 4  [2, 14]         [3, 15]
>> Core 5  [4, 16]         [5, 17]
>> Core 6  [6, 18]         [7, 19]
>> Core 10 [8, 20]         [9, 21]
>> Core 11 [10, 22]        [11, 23]
>>
>> I tried selecting different numbers of hugepages, from 1024 down to 10,
>> but I am getting the same initialization error.
>> Can someone please suggest a resolution?
>> I would appreciate any advice.
>>
>> Regards,
>> Monika
>>
>
