Hi Ben,


Thanks for the response; I'll stay tuned for updates on the DPDK side.

One small question: are there places where large memory blocks are expected to be physically contiguous? Before preallocating memory we would sometimes fail to open an RDMA session because there was not enough contiguous memory, but it seems that this is not actually a requirement, since the large allocated block is then split up into a mempool anyway.
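
To make the question concrete, here is a simplified sketch of the kind of pool creation I mean (the pool name, element count and sizes are illustrative, not our actual code):

#include <errno.h>
#include "spdk/env.h"

/* Pool of 4 KiB I/O buffers. My understanding is that the pool as a whole
 * does not need to be one physically contiguous block; only each element
 * has to be contiguous, so the backing memory can be assembled from
 * several non-adjacent hugepage regions. */
static struct spdk_mempool *g_buf_pool;

static int create_buf_pool(void)
{
    g_buf_pool = spdk_mempool_create("io_buf_pool",
                                     8192,   /* element count (illustrative) */
                                     4096,   /* element size in bytes */
                                     SPDK_MEMPOOL_DEFAULT_CACHE_SIZE,
                                     SPDK_ENV_SOCKET_ID_ANY);
    return g_buf_pool != NULL ? 0 : -ENOMEM;
}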


Shahar


From: SPDK <spdk-bounces@lists.01.org> on behalf of Walker, Benjamin <benjamin.walker@intel.com>
Sent: Wednesday, January 31, 2018 7:00:51 PM
To: spdk@lists.01.org
Subject: Re: [SPDK] Hugepage allocation, and issue with non contiguous memory
 
On Mon, 2018-01-29 at 12:11 +0000, Shahar Salzman wrote:
> On our system we make extensive use of hugepages, so only a fraction of the
> hugepages are for SPDK's use, and the allocated memory may be fragmented at
> the hugepage level.
> Initially we used "--socket-mem=2048,0", but init time was very long,
> probably because DPDK built its hugepage info from all of the hugepages on
> the system. To work around the fragmentation, I am running a small program
> that initializes DPDK before the rest of the hugepage owners start
> allocating their pages.
>
> Is there a better way to limit the number of pages that DPDK works on, and
> to preallocate a contiguous set of hugepages?

This is a common scenario that many users of SPDK have run into. However, SPDK
is just using DPDK's memory allocation framework, so SPDK can't solve it by
itself. We've been working with the DPDK team for nearly a year now to capture
all of the challenges that SPDK users have had and to turn them into
improvements to the core memory management system in DPDK. There is currently a
very large patch series that makes memory allocation in DPDK dynamic. See this
thread:

https://dpdk.org/ml/archives/dev/2017-December/084302.html

I don't know when that will land, but as soon as it does, SPDK will take
advantage of it. That's the "right" way to fix this in the long term.

In the short term, what you're doing is probably the best practice.
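
A rough sketch of what I mean by that workaround (illustrative only; it
assumes the spdk_env_opts interface, and the application name is made up):
cap the amount of hugepage memory SPDK asks DPDK for, and initialize the
environment as early as possible, before the other hugepage consumers start:

#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "rdma_target";   /* illustrative application name */
    opts.mem_size = 2048;        /* cap SPDK/DPDK hugepage usage at 2048 MB */

    /* Initialize the environment (and DPDK's memory subsystem) before
     * other hugepage users have a chance to fragment the free pages. */
    spdk_env_init(&opts);

    /* ... start the rest of the application ... */
    return 0;
}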

_______________________________________________
SPDK mailing list
SPDK@lists.01.org
https://lists.01.org/mailman/listinfo/spdk