From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: dev@dpdk.org
Cc: Dmitry Malloy <dmitrym@microsoft.com>,
	Narcisa Ana Maria Vasile <Narcisa.Vasile@microsoft.com>,
	Fady Bader <fady@mellanox.com>,
	Tal Shnaiderman <talshn@mellanox.com>,
	Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
	Thomas Monjalon <thomas@monjalon.net>,
	Anatoly Burakov <anatoly.burakov@intel.com>,
	Bruce Richardson <bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v6 05/11] eal/mem: extract common code for dynamic memory allocation
Date: Wed,  3 Jun 2020 02:03:23 +0300
Message-ID: <20200602230329.17838-6-dmitry.kozliuk@gmail.com>
In-Reply-To: <20200602230329.17838-1-dmitry.kozliuk@gmail.com>

Code in the Linux EAL that supports dynamic memory allocation (as
opposed to the static allocation used by FreeBSD) is not OS-dependent
and can be reused by the Windows EAL. Move this code to a file compiled
only for the OSes that require it. Keep Anatoly Burakov as maintainer
of the extracted code.

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 MAINTAINERS                               |   1 +
 lib/librte_eal/common/eal_common_dynmem.c | 526 +++++++++++++++++++++++
 lib/librte_eal/common/eal_private.h       |  43 +-
 lib/librte_eal/common/meson.build         |   4 +
 lib/librte_eal/freebsd/eal_memory.c       |  12 +-
 lib/librte_eal/linux/Makefile             |   1 +
 lib/librte_eal/linux/eal_memory.c         | 523 +---------------------
 7 files changed, 587 insertions(+), 523 deletions(-)
 create mode 100644 lib/librte_eal/common/eal_common_dynmem.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 1d9aff26d..a1722ca73 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -208,6 +208,7 @@ F: lib/librte_eal/include/rte_fbarray.h
 F: lib/librte_eal/include/rte_mem*
 F: lib/librte_eal/include/rte_malloc.h
 F: lib/librte_eal/common/*malloc*
+F: lib/librte_eal/common/eal_common_dynmem.c
 F: lib/librte_eal/common/eal_common_fbarray.c
 F: lib/librte_eal/common/eal_common_mem*
 F: lib/librte_eal/common/eal_hugepages.h
diff --git a/lib/librte_eal/common/eal_common_dynmem.c b/lib/librte_eal/common/eal_common_dynmem.c
new file mode 100644
index 000000000..6b07672d0
--- /dev/null
+++ b/lib/librte_eal/common/eal_common_dynmem.c
@@ -0,0 +1,526 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation.
+ * Copyright(c) 2013 6WIND S.A.
+ */
+
+#include <inttypes.h>
+#include <string.h>
+
+#include <rte_log.h>
+#include <rte_string_fns.h>
+
+#include "eal_internal_cfg.h"
+#include "eal_memalloc.h"
+#include "eal_memcfg.h"
+#include "eal_private.h"
+
+/** @file Functions common to EALs that support dynamic memory allocation. */
+
+int
+eal_dynmem_memseg_lists_init(void)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	struct memtype {
+		uint64_t page_sz;
+		int socket_id;
+	} *memtypes = NULL;
+	int i, hpi_idx, msl_idx, ret = -1; /* fail unless told to succeed */
+	struct rte_memseg_list *msl;
+	uint64_t max_mem, max_mem_per_type;
+	unsigned int max_seglists_per_type;
+	unsigned int n_memtypes, cur_type;
+
+	/* no-huge does not need this at all */
+	if (internal_config.no_hugetlbfs)
+		return 0;
+
+	/*
+	 * figuring out amount of memory we're going to have is a long and very
+	 * involved process. the basic element we're operating with is a memory
+	 * type, defined as a combination of NUMA node ID and page size (so that
+	 * e.g. 2 sockets with 2 page sizes yield 4 memory types in total).
+	 *
+	 * deciding amount of memory going towards each memory type is a
+	 * balancing act between maximum segments per type, maximum memory per
+	 * type, and number of detected NUMA nodes. the goal is to make sure
+	 * each memory type gets at least one memseg list.
+	 *
+	 * the total amount of memory is limited by RTE_MAX_MEM_MB value.
+	 *
+	 * the total amount of memory per type is limited by either
+	 * RTE_MAX_MEM_MB_PER_TYPE, or by RTE_MAX_MEM_MB divided by the number
+	 * of detected NUMA nodes. additionally, maximum number of segments per
+	 * type is also limited by RTE_MAX_MEMSEG_PER_TYPE. this is because for
+	 * smaller page sizes, it can take hundreds of thousands of segments to
+	 * reach the above specified per-type memory limits.
+	 *
+	 * additionally, each type may have multiple memseg lists associated
+	 * with it, each limited by either RTE_MAX_MEM_MB_PER_LIST for bigger
+	 * page sizes, or RTE_MAX_MEMSEG_PER_LIST segments for smaller ones.
+	 *
+	 * the number of memseg lists per type is decided based on the above
+	 * limits, and also taking number of detected NUMA nodes, to make sure
+	 * that we don't run out of memseg lists before we populate all NUMA
+	 * nodes with memory.
+	 *
+	 * we do this in three stages. first, we collect the number of types.
+	 * then, we figure out memory constraints and populate the list of
+	 * would-be memseg lists. then, we go ahead and allocate the memseg
+	 * lists.
+	 */
+
+	/* create space for mem types */
+	n_memtypes = internal_config.num_hugepage_sizes * rte_socket_count();
+	memtypes = calloc(n_memtypes, sizeof(*memtypes));
+	if (memtypes == NULL) {
+		RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n");
+		return -1;
+	}
+
+	/* populate mem types */
+	cur_type = 0;
+	for (hpi_idx = 0; hpi_idx < (int) internal_config.num_hugepage_sizes;
+			hpi_idx++) {
+		struct hugepage_info *hpi;
+		uint64_t hugepage_sz;
+
+		hpi = &internal_config.hugepage_info[hpi_idx];
+		hugepage_sz = hpi->hugepage_sz;
+
+		for (i = 0; i < (int) rte_socket_count(); i++, cur_type++) {
+			int socket_id = rte_socket_id_by_idx(i);
+
+#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
+			/* we can still sort pages by socket in legacy mode */
+			if (!internal_config.legacy_mem && socket_id > 0)
+				break;
+#endif
+			memtypes[cur_type].page_sz = hugepage_sz;
+			memtypes[cur_type].socket_id = socket_id;
+
+			RTE_LOG(DEBUG, EAL, "Detected memory type: "
+				"socket_id:%u hugepage_sz:%" PRIu64 "\n",
+				socket_id, hugepage_sz);
+		}
+	}
+	/* number of memtypes could have been lower due to no NUMA support */
+	n_memtypes = cur_type;
+
+	/* set up limits for types */
+	max_mem = (uint64_t)RTE_MAX_MEM_MB << 20;
+	max_mem_per_type = RTE_MIN((uint64_t)RTE_MAX_MEM_MB_PER_TYPE << 20,
+			max_mem / n_memtypes);
+	/*
+	 * limit maximum number of segment lists per type to ensure there's
+	 * space for memseg lists for all NUMA nodes with all page sizes
+	 */
+	max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes;
+
+	if (max_seglists_per_type == 0) {
+		RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase %s\n",
+			RTE_STR(CONFIG_RTE_MAX_MEMSEG_LISTS));
+		goto out;
+	}
+
+	/* go through all mem types and create segment lists */
+	msl_idx = 0;
+	for (cur_type = 0; cur_type < n_memtypes; cur_type++) {
+		unsigned int cur_seglist, n_seglists, n_segs;
+		unsigned int max_segs_per_type, max_segs_per_list;
+		struct memtype *type = &memtypes[cur_type];
+		uint64_t max_mem_per_list, pagesz;
+		int socket_id;
+
+		pagesz = type->page_sz;
+		socket_id = type->socket_id;
+
+		/*
+		 * we need to create segment lists for this type. we must take
+		 * into account the following things:
+		 *
+		 * 1. total amount of memory we can use for this memory type
+		 * 2. total amount of memory per memseg list allowed
+		 * 3. number of segments needed to fit the amount of memory
+		 * 4. number of segments allowed per type
+		 * 5. number of segments allowed per memseg list
+		 * 6. number of memseg lists we are allowed to take up
+		 */
+
+		/* calculate how many segments we will need in total */
+		max_segs_per_type = max_mem_per_type / pagesz;
+		/* limit number of segments to maximum allowed per type */
+		max_segs_per_type = RTE_MIN(max_segs_per_type,
+				(unsigned int)RTE_MAX_MEMSEG_PER_TYPE);
+		/* limit number of segments to maximum allowed per list */
+		max_segs_per_list = RTE_MIN(max_segs_per_type,
+				(unsigned int)RTE_MAX_MEMSEG_PER_LIST);
+
+		/* calculate how much memory we can have per segment list */
+		max_mem_per_list = RTE_MIN(max_segs_per_list * pagesz,
+				(uint64_t)RTE_MAX_MEM_MB_PER_LIST << 20);
+
+		/* calculate how many segments each segment list will have */
+		n_segs = RTE_MIN(max_segs_per_list, max_mem_per_list / pagesz);
+
+		/* calculate how many segment lists we can have */
+		n_seglists = RTE_MIN(max_segs_per_type / n_segs,
+				max_mem_per_type / max_mem_per_list);
+
+		/* limit number of segment lists according to our maximum */
+		n_seglists = RTE_MIN(n_seglists, max_seglists_per_type);
+
+		RTE_LOG(DEBUG, EAL, "Creating %i segment lists: "
+				"n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n",
+			n_seglists, n_segs, socket_id, pagesz);
+
+		/* create all segment lists */
+		for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) {
+			if (msl_idx >= RTE_MAX_MEMSEG_LISTS) {
+				RTE_LOG(ERR, EAL,
+					"No more space in memseg lists, please increase %s\n",
+					RTE_STR(CONFIG_RTE_MAX_MEMSEG_LISTS));
+				goto out;
+			}
+			msl = &mcfg->memsegs[msl_idx++];
+
+			if (eal_memseg_list_init(msl, pagesz, n_segs,
+					socket_id, cur_seglist, true))
+				goto out;
+
+			if (eal_memseg_list_alloc(msl, 0)) {
+				RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n");
+				goto out;
+			}
+		}
+	}
+	/* we're successful */
+	ret = 0;
+out:
+	free(memtypes);
+	return ret;
+}
+
+static int __rte_unused
+hugepage_count_walk(const struct rte_memseg_list *msl, void *arg)
+{
+	struct hugepage_info *hpi = arg;
+
+	if (msl->page_sz != hpi->hugepage_sz)
+		return 0;
+
+	hpi->num_pages[msl->socket_id] += msl->memseg_arr.len;
+	return 0;
+}
+
+static int
+limits_callback(int socket_id, size_t cur_limit, size_t new_len)
+{
+	RTE_SET_USED(socket_id);
+	RTE_SET_USED(cur_limit);
+	RTE_SET_USED(new_len);
+	return -1;
+}
+
+int
+eal_dynmem_hugepage_init(void)
+{
+	struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
+	uint64_t memory[RTE_MAX_NUMA_NODES];
+	int hp_sz_idx, socket_id;
+
+	memset(used_hp, 0, sizeof(used_hp));
+
+	for (hp_sz_idx = 0;
+			hp_sz_idx < (int) internal_config.num_hugepage_sizes;
+			hp_sz_idx++) {
+#ifndef RTE_ARCH_64
+		struct hugepage_info dummy;
+		unsigned int i;
+#endif
+		/* also initialize used_hp hugepage sizes in used_hp */
+		struct hugepage_info *hpi;
+		hpi = &internal_config.hugepage_info[hp_sz_idx];
+		used_hp[hp_sz_idx].hugepage_sz = hpi->hugepage_sz;
+
+#ifndef RTE_ARCH_64
+		/* for 32-bit, limit number of pages on socket to whatever we've
+		 * preallocated, as we cannot allocate more.
+		 */
+		memset(&dummy, 0, sizeof(dummy));
+		dummy.hugepage_sz = hpi->hugepage_sz;
+		if (rte_memseg_list_walk(hugepage_count_walk, &dummy) < 0)
+			return -1;
+
+		for (i = 0; i < RTE_DIM(dummy.num_pages); i++) {
+			hpi->num_pages[i] = RTE_MIN(hpi->num_pages[i],
+					dummy.num_pages[i]);
+		}
+#endif
+	}
+
+	/* make a copy of socket_mem, needed for balanced allocation. */
+	for (hp_sz_idx = 0; hp_sz_idx < RTE_MAX_NUMA_NODES; hp_sz_idx++)
+		memory[hp_sz_idx] = internal_config.socket_mem[hp_sz_idx];
+
+	/* calculate final number of pages */
+	if (eal_dynmem_calc_num_pages_per_socket(memory,
+			internal_config.hugepage_info, used_hp,
+			internal_config.num_hugepage_sizes) < 0)
+		return -1;
+
+	for (hp_sz_idx = 0;
+			hp_sz_idx < (int)internal_config.num_hugepage_sizes;
+			hp_sz_idx++) {
+		for (socket_id = 0; socket_id < RTE_MAX_NUMA_NODES;
+				socket_id++) {
+			struct rte_memseg **pages;
+			struct hugepage_info *hpi = &used_hp[hp_sz_idx];
+			unsigned int num_pages = hpi->num_pages[socket_id];
+			unsigned int num_pages_alloc;
+
+			if (num_pages == 0)
+				continue;
+
+			RTE_LOG(DEBUG, EAL,
+				"Allocating %u pages of size %" PRIu64 "M "
+				"on socket %i\n",
+				num_pages, hpi->hugepage_sz >> 20, socket_id);
+
+			/* we may not be able to allocate all pages in one go,
+			 * because we break up our memory map into multiple
+			 * memseg lists. therefore, try allocating multiple
+			 * times and see if we can get the desired number of
+			 * pages from multiple allocations.
+			 */
+
+			num_pages_alloc = 0;
+			do {
+				int i, cur_pages, needed;
+
+				needed = num_pages - num_pages_alloc;
+
+				pages = malloc(sizeof(*pages) * needed);
+				if (pages == NULL) {
+					RTE_LOG(ERR, EAL,
+						"Cannot allocate memory for page addresses\n");
+					return -1;
+				}
+
+				/* do not request exact number of pages */
+				cur_pages = eal_memalloc_alloc_seg_bulk(pages,
+						needed, hpi->hugepage_sz,
+						socket_id, false);
+				if (cur_pages <= 0) {
+					free(pages);
+					return -1;
+				}
+
+				/* mark preallocated pages as unfreeable */
+				for (i = 0; i < cur_pages; i++) {
+					struct rte_memseg *ms = pages[i];
+					ms->flags |=
+						RTE_MEMSEG_FLAG_DO_NOT_FREE;
+				}
+				free(pages);
+
+				num_pages_alloc += cur_pages;
+			} while (num_pages_alloc != num_pages);
+		}
+	}
+
+	/* if socket limits were specified, set them */
+	if (internal_config.force_socket_limits) {
+		unsigned int i;
+		for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
+			uint64_t limit = internal_config.socket_limit[i];
+			if (limit == 0)
+				continue;
+			if (rte_mem_alloc_validator_register("socket-limit",
+					limits_callback, i, limit))
+				RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n");
+		}
+	}
+	return 0;
+}
+
+__rte_unused /* function is unused on 32-bit builds */
+static inline uint64_t
+get_socket_mem_size(int socket)
+{
+	uint64_t size = 0;
+	unsigned int i;
+
+	for (i = 0; i < internal_config.num_hugepage_sizes; i++) {
+		struct hugepage_info *hpi = &internal_config.hugepage_info[i];
+		size += hpi->hugepage_sz * hpi->num_pages[socket];
+	}
+
+	return size;
+}
+
+int
+eal_dynmem_calc_num_pages_per_socket(
+	uint64_t *memory, struct hugepage_info *hp_info,
+	struct hugepage_info *hp_used, unsigned int num_hp_info)
+{
+	unsigned int socket, j, i = 0;
+	unsigned int requested, available;
+	int total_num_pages = 0;
+	uint64_t remaining_mem, cur_mem;
+	uint64_t total_mem = internal_config.memory;
+
+	if (num_hp_info == 0)
+		return -1;
+
+	/* if specific memory amounts per socket weren't requested */
+	if (internal_config.force_sockets == 0) {
+		size_t total_size;
+#ifdef RTE_ARCH_64
+		int cpu_per_socket[RTE_MAX_NUMA_NODES];
+		size_t default_size;
+		unsigned int lcore_id;
+
+		/* Compute number of cores per socket */
+		memset(cpu_per_socket, 0, sizeof(cpu_per_socket));
+		RTE_LCORE_FOREACH(lcore_id) {
+			cpu_per_socket[rte_lcore_to_socket_id(lcore_id)]++;
+		}
+
+		/*
+		 * Automatically spread requested memory amongst detected
+		 * sockets according to number of cores from CPU mask present
+		 * on each socket.
+		 */
+		total_size = internal_config.memory;
+		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
+				socket++) {
+
+			/* Set memory amount per socket */
+			default_size = internal_config.memory *
+				cpu_per_socket[socket] / rte_lcore_count();
+
+			/* Limit to maximum available memory on socket */
+			default_size = RTE_MIN(
+				default_size, get_socket_mem_size(socket));
+
+			/* Update sizes */
+			memory[socket] = default_size;
+			total_size -= default_size;
+		}
+
+		/*
+		 * If some memory is remaining, try to allocate it by getting
+		 * all available memory from sockets, one after the other.
+		 */
+		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
+				socket++) {
+			/* take whatever is available */
+			default_size = RTE_MIN(
+				get_socket_mem_size(socket) - memory[socket],
+				total_size);
+
+			/* Update sizes */
+			memory[socket] += default_size;
+			total_size -= default_size;
+		}
+#else
+		/* in 32-bit mode, allocate all of the memory only on master
+		 * lcore socket
+		 */
+		total_size = internal_config.memory;
+		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
+				socket++) {
+			struct rte_config *cfg = rte_eal_get_configuration();
+			unsigned int master_lcore_socket;
+
+			master_lcore_socket =
+				rte_lcore_to_socket_id(cfg->master_lcore);
+
+			if (master_lcore_socket != socket)
+				continue;
+
+			/* Update sizes */
+			memory[socket] = total_size;
+			break;
+		}
+#endif
+	}
+
+	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0;
+			socket++) {
+		/* skips if the memory on specific socket wasn't requested */
+		for (i = 0; i < num_hp_info && memory[socket] != 0; i++) {
+			rte_strscpy(hp_used[i].hugedir, hp_info[i].hugedir,
+				sizeof(hp_used[i].hugedir));
+			hp_used[i].num_pages[socket] = RTE_MIN(
+					memory[socket] / hp_info[i].hugepage_sz,
+					hp_info[i].num_pages[socket]);
+
+			cur_mem = hp_used[i].num_pages[socket] *
+					hp_used[i].hugepage_sz;
+
+			memory[socket] -= cur_mem;
+			total_mem -= cur_mem;
+
+			total_num_pages += hp_used[i].num_pages[socket];
+
+			/* check if we have met all memory requests */
+			if (memory[socket] == 0)
+				break;
+
+			/* Check if we have any more pages left at this size,
+			 * if so, move on to next size.
+			 */
+			if (hp_used[i].num_pages[socket] ==
+					hp_info[i].num_pages[socket])
+				continue;
+			/* At this point we know that there are more pages
+			 * available that are bigger than the memory we want,
+			 * so lets see if we can get enough from other page
+			 * sizes.
+			 */
+			remaining_mem = 0;
+			for (j = i+1; j < num_hp_info; j++)
+				remaining_mem += hp_info[j].hugepage_sz *
+				hp_info[j].num_pages[socket];
+
+			/* Is there enough other memory?
+			 * If not, allocate another page and quit.
+			 */
+			if (remaining_mem < memory[socket]) {
+				cur_mem = RTE_MIN(
+					memory[socket], hp_info[i].hugepage_sz);
+				memory[socket] -= cur_mem;
+				total_mem -= cur_mem;
+				hp_used[i].num_pages[socket]++;
+				total_num_pages++;
+				break; /* we are done with this socket*/
+			}
+		}
+
+		/* if we didn't satisfy all memory requirements per socket */
+		if (memory[socket] > 0 &&
+				internal_config.socket_mem[socket] != 0) {
+			/* to prevent icc errors */
+			requested = (unsigned int)(
+				internal_config.socket_mem[socket] / 0x100000);
+			available = requested -
+				((unsigned int)(memory[socket] / 0x100000));
+			RTE_LOG(ERR, EAL, "Not enough memory available on "
+				"socket %u! Requested: %uMB, available: %uMB\n",
+				socket, requested, available);
+			return -1;
+		}
+	}
+
+	/* if we didn't satisfy total memory requirements */
+	if (total_mem > 0) {
+		requested = (unsigned int)(internal_config.memory / 0x100000);
+		available = requested - (unsigned int)(total_mem / 0x100000);
+		RTE_LOG(ERR, EAL, "Not enough memory available! "
+			"Requested: %uMB, available: %uMB\n",
+			requested, available);
+		return -1;
+	}
+	return total_num_pages;
+}
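
To make the sizing rules in the comment above concrete, here is a worked
example for eal_dynmem_memseg_lists_init(), assuming the typical
build-time defaults (RTE_MAX_MEM_MB=524288, RTE_MAX_MEM_MB_PER_TYPE=65536,
RTE_MAX_MEM_MB_PER_LIST=32768, RTE_MAX_MEMSEG_PER_TYPE=32768,
RTE_MAX_MEMSEG_PER_LIST=8192, RTE_MAX_MEMSEG_LISTS=128); the actual
values come from the build configuration and may differ:

	/*
	 * Illustrative only: 2 NUMA nodes x 2 page sizes (2 MB and 1 GB)
	 * yield n_memtypes = 4.
	 *
	 * max_mem               = 524288 MB = 512 GB
	 * max_mem_per_type      = min(64 GB, 512 GB / 4) = 64 GB
	 * max_seglists_per_type = 128 / 4 = 32
	 *
	 * For the (socket 0, 2 MB) type:
	 *   max_segs_per_type = min(64 GB / 2 MB, 32768)  = 32768
	 *   max_segs_per_list = min(32768, 8192)          = 8192
	 *   max_mem_per_list  = min(8192 * 2 MB, 32 GB)   = 16 GB
	 *   n_segs            = min(8192, 16 GB / 2 MB)   = 8192
	 *   n_seglists        = min(32768 / 8192, 64 GB / 16 GB) = 4
	 *
	 * This type therefore gets 4 memseg lists of 8192 segments each,
	 * i.e. 64 GB of reservable VA space.
	 */
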
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index c698ffbaf..290262ff5 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -13,6 +13,8 @@
 #include <rte_lcore.h>
 #include <rte_memory.h>
 
+#include "eal_internal_cfg.h"
+
 /**
  * Structure storing internal configuration (per-lcore)
  */
@@ -316,6 +318,45 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags);
 void
 eal_memseg_list_populate(struct rte_memseg_list *msl, void *addr, int n_segs);
 
+/**
+ * Distribute available memory between MSLs.
+ *
+ * @return
+ *  0 on success, (-1) on failure.
+ */
+int
+eal_dynmem_memseg_lists_init(void);
+
+/**
+ * Preallocate hugepages for dynamic allocation.
+ *
+ * @return
+ *  0 on success, (-1) on failure.
+ */
+int
+eal_dynmem_hugepage_init(void);
+
+/**
+ * Given the list of hugepage sizes and the number of pages thereof,
+ * calculate the best number of pages of each size to fulfill the request
+ * for RAM on each NUMA node.
+ *
+ * @param memory
+ *  Amounts of memory requested for each NUMA node of RTE_MAX_NUMA_NODES.
+ * @param hp_info
+ *  Information about hugepages of different size.
+ * @param hp_used
+ *  Receives information about used hugepages of each size.
+ * @param num_hp_info
+ *  Number of elements in hp_info and hp_used.
+ * @return
+ *  0 on success, (-1) on failure.
+ */
+int
+eal_dynmem_calc_num_pages_per_socket(
+		uint64_t *memory, struct hugepage_info *hp_info,
+		struct hugepage_info *hp_used, unsigned int num_hp_info);
+
 /**
  * Get cpu core_id.
  *
@@ -596,7 +637,7 @@ void *
 eal_mem_reserve(void *requested_addr, size_t size, int flags);
 
 /**
- * Free memory obtained by eal_mem_reserve() or eal_mem_alloc().
+ * Free memory obtained by eal_mem_reserve() and possibly allocated.
  *
  * If *virt* and *size* describe a part of the reserved region,
  * only this part of the region is freed (accurately up to the system
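
As a usage sketch for eal_dynmem_calc_num_pages_per_socket(): after this
patch the legacy Linux path calls it directly, so a caller looks roughly
like the following (modeled on eal_legacy_hugepage_init(); error
handling trimmed):

	uint64_t memory[RTE_MAX_NUMA_NODES];
	struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
	int nr_pages, i;

	/* "memory" holds the per-node requests and is consumed by the
	 * call; "used_hp" receives the per-size page counts to map.
	 */
	memset(used_hp, 0, sizeof(used_hp));
	for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
		memory[i] = internal_config.socket_mem[i];

	nr_pages = eal_dynmem_calc_num_pages_per_socket(memory,
			internal_config.hugepage_info, used_hp,
			internal_config.num_hugepage_sizes);
	if (nr_pages < 0)
		return -1; /* the requests cannot be satisfied */
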
diff --git a/lib/librte_eal/common/meson.build b/lib/librte_eal/common/meson.build
index 55aaeb18e..d91c22220 100644
--- a/lib/librte_eal/common/meson.build
+++ b/lib/librte_eal/common/meson.build
@@ -56,3 +56,7 @@ sources += files(
 	'rte_reciprocal.c',
 	'rte_service.c',
 )
+
+if is_linux
+	sources += files('eal_common_dynmem.c')
+endif
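
The common file is wired into the build for Linux only at this point.
Once the Windows EAL starts consuming it (the stated goal of this
extraction), the condition would presumably widen; a sketch of that
follow-up, not part of this patch:

	if is_linux or is_windows
		sources += files('eal_common_dynmem.c')
	endif
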
diff --git a/lib/librte_eal/freebsd/eal_memory.c b/lib/librte_eal/freebsd/eal_memory.c
index 29c3ed5a9..7106b8b84 100644
--- a/lib/librte_eal/freebsd/eal_memory.c
+++ b/lib/librte_eal/freebsd/eal_memory.c
@@ -317,14 +317,6 @@ get_mem_amount(uint64_t page_sz, uint64_t max_mem)
 	return RTE_ALIGN(area_sz, page_sz);
 }
 
-static int
-memseg_list_init(struct rte_memseg_list *msl, uint64_t page_sz,
-		int n_segs, int socket_id, int type_msl_idx)
-{
-	return eal_memseg_list_init(
-		msl, page_sz, n_segs, socket_id, type_msl_idx, false);
-}
-
 static int
 memseg_list_alloc(struct rte_memseg_list *msl)
 {
@@ -421,8 +413,8 @@ memseg_primary_init(void)
 					cur_max_mem);
 			n_segs = cur_mem / hugepage_sz;
 
-			if (memseg_list_init(msl, hugepage_sz, n_segs,
-					0, type_msl_idx))
+			if (eal_memseg_list_init(msl, hugepage_sz, n_segs,
+					0, type_msl_idx, false))
 				return -1;
 
 			total_segs += msl->memseg_arr.len;
diff --git a/lib/librte_eal/linux/Makefile b/lib/librte_eal/linux/Makefile
index 8febf2212..07ce643ba 100644
--- a/lib/librte_eal/linux/Makefile
+++ b/lib/librte_eal/linux/Makefile
@@ -50,6 +50,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_timer.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_memzone.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_log.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_launch.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_dynmem.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_mcfg.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_memalloc.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_memory.c
diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c
index 8b5fe613e..12d72f726 100644
--- a/lib/librte_eal/linux/eal_memory.c
+++ b/lib/librte_eal/linux/eal_memory.c
@@ -812,20 +812,6 @@ memseg_list_free(struct rte_memseg_list *msl)
 	return 0;
 }
 
-static int
-memseg_list_init(struct rte_memseg_list *msl, uint64_t page_sz,
-		int n_segs, int socket_id, int type_msl_idx)
-{
-	return eal_memseg_list_init(
-		msl, page_sz, n_segs, socket_id, type_msl_idx, true);
-}
-
-static int
-memseg_list_alloc(struct rte_memseg_list *msl)
-{
-	return eal_memseg_list_alloc(msl, 0);
-}
-
 /*
  * Our VA space is not preallocated yet, so preallocate it here. We need to know
  * how many segments there are in order to map all pages into one address space,
@@ -969,12 +955,12 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages)
 			}
 
 			/* now, allocate fbarray itself */
-			if (memseg_list_init(msl, page_sz, n_segs, socket,
-						msl_idx) < 0)
+			if (eal_memseg_list_init(msl, page_sz, n_segs,
+					socket, msl_idx, true) < 0)
 				return -1;
 
 			/* finally, allocate VA space */
-			if (memseg_list_alloc(msl) < 0)
+			if (eal_memseg_list_alloc(msl, 0) < 0)
 				return -1;
 		}
 	}
@@ -1045,182 +1031,6 @@ remap_needed_hugepages(struct hugepage_file *hugepages, int n_pages)
 	return 0;
 }
 
-__rte_unused /* function is unused on 32-bit builds */
-static inline uint64_t
-get_socket_mem_size(int socket)
-{
-	uint64_t size = 0;
-	unsigned i;
-
-	for (i = 0; i < internal_config.num_hugepage_sizes; i++){
-		struct hugepage_info *hpi = &internal_config.hugepage_info[i];
-		size += hpi->hugepage_sz * hpi->num_pages[socket];
-	}
-
-	return size;
-}
-
-/*
- * This function is a NUMA-aware equivalent of calc_num_pages.
- * It takes in the list of hugepage sizes and the
- * number of pages thereof, and calculates the best number of
- * pages of each size to fulfill the request for <memory> ram
- */
-static int
-calc_num_pages_per_socket(uint64_t * memory,
-		struct hugepage_info *hp_info,
-		struct hugepage_info *hp_used,
-		unsigned num_hp_info)
-{
-	unsigned socket, j, i = 0;
-	unsigned requested, available;
-	int total_num_pages = 0;
-	uint64_t remaining_mem, cur_mem;
-	uint64_t total_mem = internal_config.memory;
-
-	if (num_hp_info == 0)
-		return -1;
-
-	/* if specific memory amounts per socket weren't requested */
-	if (internal_config.force_sockets == 0) {
-		size_t total_size;
-#ifdef RTE_ARCH_64
-		int cpu_per_socket[RTE_MAX_NUMA_NODES];
-		size_t default_size;
-		unsigned lcore_id;
-
-		/* Compute number of cores per socket */
-		memset(cpu_per_socket, 0, sizeof(cpu_per_socket));
-		RTE_LCORE_FOREACH(lcore_id) {
-			cpu_per_socket[rte_lcore_to_socket_id(lcore_id)]++;
-		}
-
-		/*
-		 * Automatically spread requested memory amongst detected sockets according
-		 * to number of cores from cpu mask present on each socket
-		 */
-		total_size = internal_config.memory;
-		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
-
-			/* Set memory amount per socket */
-			default_size = (internal_config.memory * cpu_per_socket[socket])
-					/ rte_lcore_count();
-
-			/* Limit to maximum available memory on socket */
-			default_size = RTE_MIN(default_size, get_socket_mem_size(socket));
-
-			/* Update sizes */
-			memory[socket] = default_size;
-			total_size -= default_size;
-		}
-
-		/*
-		 * If some memory is remaining, try to allocate it by getting all
-		 * available memory from sockets, one after the other
-		 */
-		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
-			/* take whatever is available */
-			default_size = RTE_MIN(get_socket_mem_size(socket) - memory[socket],
-					       total_size);
-
-			/* Update sizes */
-			memory[socket] += default_size;
-			total_size -= default_size;
-		}
-#else
-		/* in 32-bit mode, allocate all of the memory only on master
-		 * lcore socket
-		 */
-		total_size = internal_config.memory;
-		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
-				socket++) {
-			struct rte_config *cfg = rte_eal_get_configuration();
-			unsigned int master_lcore_socket;
-
-			master_lcore_socket =
-				rte_lcore_to_socket_id(cfg->master_lcore);
-
-			if (master_lcore_socket != socket)
-				continue;
-
-			/* Update sizes */
-			memory[socket] = total_size;
-			break;
-		}
-#endif
-	}
-
-	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
-		/* skips if the memory on specific socket wasn't requested */
-		for (i = 0; i < num_hp_info && memory[socket] != 0; i++){
-			strlcpy(hp_used[i].hugedir, hp_info[i].hugedir,
-				sizeof(hp_used[i].hugedir));
-			hp_used[i].num_pages[socket] = RTE_MIN(
-					memory[socket] / hp_info[i].hugepage_sz,
-					hp_info[i].num_pages[socket]);
-
-			cur_mem = hp_used[i].num_pages[socket] *
-					hp_used[i].hugepage_sz;
-
-			memory[socket] -= cur_mem;
-			total_mem -= cur_mem;
-
-			total_num_pages += hp_used[i].num_pages[socket];
-
-			/* check if we have met all memory requests */
-			if (memory[socket] == 0)
-				break;
-
-			/* check if we have any more pages left at this size, if so
-			 * move on to next size */
-			if (hp_used[i].num_pages[socket] == hp_info[i].num_pages[socket])
-				continue;
-			/* At this point we know that there are more pages available that are
-			 * bigger than the memory we want, so lets see if we can get enough
-			 * from other page sizes.
-			 */
-			remaining_mem = 0;
-			for (j = i+1; j < num_hp_info; j++)
-				remaining_mem += hp_info[j].hugepage_sz *
-				hp_info[j].num_pages[socket];
-
-			/* is there enough other memory, if not allocate another page and quit */
-			if (remaining_mem < memory[socket]){
-				cur_mem = RTE_MIN(memory[socket],
-						hp_info[i].hugepage_sz);
-				memory[socket] -= cur_mem;
-				total_mem -= cur_mem;
-				hp_used[i].num_pages[socket]++;
-				total_num_pages++;
-				break; /* we are done with this socket*/
-			}
-		}
-		/* if we didn't satisfy all memory requirements per socket */
-		if (memory[socket] > 0 &&
-				internal_config.socket_mem[socket] != 0) {
-			/* to prevent icc errors */
-			requested = (unsigned) (internal_config.socket_mem[socket] /
-					0x100000);
-			available = requested -
-					((unsigned) (memory[socket] / 0x100000));
-			RTE_LOG(ERR, EAL, "Not enough memory available on socket %u! "
-					"Requested: %uMB, available: %uMB\n", socket,
-					requested, available);
-			return -1;
-		}
-	}
-
-	/* if we didn't satisfy total memory requirements */
-	if (total_mem > 0) {
-		requested = (unsigned) (internal_config.memory / 0x100000);
-		available = requested - (unsigned) (total_mem / 0x100000);
-		RTE_LOG(ERR, EAL, "Not enough memory available! Requested: %uMB,"
-				" available: %uMB\n", requested, available);
-		return -1;
-	}
-	return total_num_pages;
-}
-
 static inline size_t
 eal_get_hugepage_mem_size(void)
 {
@@ -1524,7 +1334,7 @@ eal_legacy_hugepage_init(void)
 		memory[i] = internal_config.socket_mem[i];
 
 	/* calculate final number of pages */
-	nr_hugepages = calc_num_pages_per_socket(memory,
+	nr_hugepages = eal_dynmem_calc_num_pages_per_socket(memory,
 			internal_config.hugepage_info, used_hp,
 			internal_config.num_hugepage_sizes);
 
@@ -1651,140 +1461,6 @@ eal_legacy_hugepage_init(void)
 	return -1;
 }
 
-static int __rte_unused
-hugepage_count_walk(const struct rte_memseg_list *msl, void *arg)
-{
-	struct hugepage_info *hpi = arg;
-
-	if (msl->page_sz != hpi->hugepage_sz)
-		return 0;
-
-	hpi->num_pages[msl->socket_id] += msl->memseg_arr.len;
-	return 0;
-}
-
-static int
-limits_callback(int socket_id, size_t cur_limit, size_t new_len)
-{
-	RTE_SET_USED(socket_id);
-	RTE_SET_USED(cur_limit);
-	RTE_SET_USED(new_len);
-	return -1;
-}
-
-static int
-eal_hugepage_init(void)
-{
-	struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
-	uint64_t memory[RTE_MAX_NUMA_NODES];
-	int hp_sz_idx, socket_id;
-
-	memset(used_hp, 0, sizeof(used_hp));
-
-	for (hp_sz_idx = 0;
-			hp_sz_idx < (int) internal_config.num_hugepage_sizes;
-			hp_sz_idx++) {
-#ifndef RTE_ARCH_64
-		struct hugepage_info dummy;
-		unsigned int i;
-#endif
-		/* also initialize used_hp hugepage sizes in used_hp */
-		struct hugepage_info *hpi;
-		hpi = &internal_config.hugepage_info[hp_sz_idx];
-		used_hp[hp_sz_idx].hugepage_sz = hpi->hugepage_sz;
-
-#ifndef RTE_ARCH_64
-		/* for 32-bit, limit number of pages on socket to whatever we've
-		 * preallocated, as we cannot allocate more.
-		 */
-		memset(&dummy, 0, sizeof(dummy));
-		dummy.hugepage_sz = hpi->hugepage_sz;
-		if (rte_memseg_list_walk(hugepage_count_walk, &dummy) < 0)
-			return -1;
-
-		for (i = 0; i < RTE_DIM(dummy.num_pages); i++) {
-			hpi->num_pages[i] = RTE_MIN(hpi->num_pages[i],
-					dummy.num_pages[i]);
-		}
-#endif
-	}
-
-	/* make a copy of socket_mem, needed for balanced allocation. */
-	for (hp_sz_idx = 0; hp_sz_idx < RTE_MAX_NUMA_NODES; hp_sz_idx++)
-		memory[hp_sz_idx] = internal_config.socket_mem[hp_sz_idx];
-
-	/* calculate final number of pages */
-	if (calc_num_pages_per_socket(memory,
-			internal_config.hugepage_info, used_hp,
-			internal_config.num_hugepage_sizes) < 0)
-		return -1;
-
-	for (hp_sz_idx = 0;
-			hp_sz_idx < (int)internal_config.num_hugepage_sizes;
-			hp_sz_idx++) {
-		for (socket_id = 0; socket_id < RTE_MAX_NUMA_NODES;
-				socket_id++) {
-			struct rte_memseg **pages;
-			struct hugepage_info *hpi = &used_hp[hp_sz_idx];
-			unsigned int num_pages = hpi->num_pages[socket_id];
-			unsigned int num_pages_alloc;
-
-			if (num_pages == 0)
-				continue;
-
-			RTE_LOG(DEBUG, EAL, "Allocating %u pages of size %" PRIu64 "M on socket %i\n",
-				num_pages, hpi->hugepage_sz >> 20, socket_id);
-
-			/* we may not be able to allocate all pages in one go,
-			 * because we break up our memory map into multiple
-			 * memseg lists. therefore, try allocating multiple
-			 * times and see if we can get the desired number of
-			 * pages from multiple allocations.
-			 */
-
-			num_pages_alloc = 0;
-			do {
-				int i, cur_pages, needed;
-
-				needed = num_pages - num_pages_alloc;
-
-				pages = malloc(sizeof(*pages) * needed);
-
-				/* do not request exact number of pages */
-				cur_pages = eal_memalloc_alloc_seg_bulk(pages,
-						needed, hpi->hugepage_sz,
-						socket_id, false);
-				if (cur_pages <= 0) {
-					free(pages);
-					return -1;
-				}
-
-				/* mark preallocated pages as unfreeable */
-				for (i = 0; i < cur_pages; i++) {
-					struct rte_memseg *ms = pages[i];
-					ms->flags |= RTE_MEMSEG_FLAG_DO_NOT_FREE;
-				}
-				free(pages);
-
-				num_pages_alloc += cur_pages;
-			} while (num_pages_alloc != num_pages);
-		}
-	}
-	/* if socket limits were specified, set them */
-	if (internal_config.force_socket_limits) {
-		unsigned int i;
-		for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
-			uint64_t limit = internal_config.socket_limit[i];
-			if (limit == 0)
-				continue;
-			if (rte_mem_alloc_validator_register("socket-limit",
-					limits_callback, i, limit))
-				RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n");
-		}
-	}
-	return 0;
-}
-
 /*
  * uses fstat to report the size of a file on disk
  */
@@ -1943,7 +1619,7 @@ rte_eal_hugepage_init(void)
 {
 	return internal_config.legacy_mem ?
 			eal_legacy_hugepage_init() :
-			eal_hugepage_init();
+			eal_dynmem_hugepage_init();
 }
 
 int
@@ -2122,8 +1798,9 @@ memseg_primary_init_32(void)
 						max_pagesz_mem);
 				n_segs = cur_mem / hugepage_sz;
 
-				if (memseg_list_init(msl, hugepage_sz, n_segs,
-						socket_id, type_msl_idx)) {
+				if (eal_memseg_list_init(msl, hugepage_sz,
+						n_segs, socket_id, type_msl_idx,
+						true)) {
 					/* failing to allocate a memseg list is
 					 * a serious error.
 					 */
@@ -2131,7 +1808,7 @@ memseg_primary_init_32(void)
 					return -1;
 				}
 
-				if (memseg_list_alloc(msl)) {
+				if (eal_memseg_list_alloc(msl, 0)) {
 					/* if we couldn't allocate VA space, we
 					 * can try with smaller page sizes.
 					 */
@@ -2162,185 +1839,7 @@ memseg_primary_init_32(void)
 static int __rte_unused
 memseg_primary_init(void)
 {
-	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
-	struct memtype {
-		uint64_t page_sz;
-		int socket_id;
-	} *memtypes = NULL;
-	int i, hpi_idx, msl_idx, ret = -1; /* fail unless told to succeed */
-	struct rte_memseg_list *msl;
-	uint64_t max_mem, max_mem_per_type;
-	unsigned int max_seglists_per_type;
-	unsigned int n_memtypes, cur_type;
-
-	/* no-huge does not need this at all */
-	if (internal_config.no_hugetlbfs)
-		return 0;
-
-	/*
-	 * figuring out amount of memory we're going to have is a long and very
-	 * involved process. the basic element we're operating with is a memory
-	 * type, defined as a combination of NUMA node ID and page size (so that
-	 * e.g. 2 sockets with 2 page sizes yield 4 memory types in total).
-	 *
-	 * deciding amount of memory going towards each memory type is a
-	 * balancing act between maximum segments per type, maximum memory per
-	 * type, and number of detected NUMA nodes. the goal is to make sure
-	 * each memory type gets at least one memseg list.
-	 *
-	 * the total amount of memory is limited by RTE_MAX_MEM_MB value.
-	 *
-	 * the total amount of memory per type is limited by either
-	 * RTE_MAX_MEM_MB_PER_TYPE, or by RTE_MAX_MEM_MB divided by the number
-	 * of detected NUMA nodes. additionally, maximum number of segments per
-	 * type is also limited by RTE_MAX_MEMSEG_PER_TYPE. this is because for
-	 * smaller page sizes, it can take hundreds of thousands of segments to
-	 * reach the above specified per-type memory limits.
-	 *
-	 * additionally, each type may have multiple memseg lists associated
-	 * with it, each limited by either RTE_MAX_MEM_MB_PER_LIST for bigger
-	 * page sizes, or RTE_MAX_MEMSEG_PER_LIST segments for smaller ones.
-	 *
-	 * the number of memseg lists per type is decided based on the above
-	 * limits, and also taking number of detected NUMA nodes, to make sure
-	 * that we don't run out of memseg lists before we populate all NUMA
-	 * nodes with memory.
-	 *
-	 * we do this in three stages. first, we collect the number of types.
-	 * then, we figure out memory constraints and populate the list of
-	 * would-be memseg lists. then, we go ahead and allocate the memseg
-	 * lists.
-	 */
-
-	/* create space for mem types */
-	n_memtypes = internal_config.num_hugepage_sizes * rte_socket_count();
-	memtypes = calloc(n_memtypes, sizeof(*memtypes));
-	if (memtypes == NULL) {
-		RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n");
-		return -1;
-	}
-
-	/* populate mem types */
-	cur_type = 0;
-	for (hpi_idx = 0; hpi_idx < (int) internal_config.num_hugepage_sizes;
-			hpi_idx++) {
-		struct hugepage_info *hpi;
-		uint64_t hugepage_sz;
-
-		hpi = &internal_config.hugepage_info[hpi_idx];
-		hugepage_sz = hpi->hugepage_sz;
-
-		for (i = 0; i < (int) rte_socket_count(); i++, cur_type++) {
-			int socket_id = rte_socket_id_by_idx(i);
-
-#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
-			/* we can still sort pages by socket in legacy mode */
-			if (!internal_config.legacy_mem && socket_id > 0)
-				break;
-#endif
-			memtypes[cur_type].page_sz = hugepage_sz;
-			memtypes[cur_type].socket_id = socket_id;
-
-			RTE_LOG(DEBUG, EAL, "Detected memory type: "
-				"socket_id:%u hugepage_sz:%" PRIu64 "\n",
-				socket_id, hugepage_sz);
-		}
-	}
-	/* number of memtypes could have been lower due to no NUMA support */
-	n_memtypes = cur_type;
-
-	/* set up limits for types */
-	max_mem = (uint64_t)RTE_MAX_MEM_MB << 20;
-	max_mem_per_type = RTE_MIN((uint64_t)RTE_MAX_MEM_MB_PER_TYPE << 20,
-			max_mem / n_memtypes);
-	/*
-	 * limit maximum number of segment lists per type to ensure there's
-	 * space for memseg lists for all NUMA nodes with all page sizes
-	 */
-	max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes;
-
-	if (max_seglists_per_type == 0) {
-		RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase %s\n",
-			RTE_STR(CONFIG_RTE_MAX_MEMSEG_LISTS));
-		goto out;
-	}
-
-	/* go through all mem types and create segment lists */
-	msl_idx = 0;
-	for (cur_type = 0; cur_type < n_memtypes; cur_type++) {
-		unsigned int cur_seglist, n_seglists, n_segs;
-		unsigned int max_segs_per_type, max_segs_per_list;
-		struct memtype *type = &memtypes[cur_type];
-		uint64_t max_mem_per_list, pagesz;
-		int socket_id;
-
-		pagesz = type->page_sz;
-		socket_id = type->socket_id;
-
-		/*
-		 * we need to create segment lists for this type. we must take
-		 * into account the following things:
-		 *
-		 * 1. total amount of memory we can use for this memory type
-		 * 2. total amount of memory per memseg list allowed
-		 * 3. number of segments needed to fit the amount of memory
-		 * 4. number of segments allowed per type
-		 * 5. number of segments allowed per memseg list
-		 * 6. number of memseg lists we are allowed to take up
-		 */
-
-		/* calculate how much segments we will need in total */
-		max_segs_per_type = max_mem_per_type / pagesz;
-		/* limit number of segments to maximum allowed per type */
-		max_segs_per_type = RTE_MIN(max_segs_per_type,
-				(unsigned int)RTE_MAX_MEMSEG_PER_TYPE);
-		/* limit number of segments to maximum allowed per list */
-		max_segs_per_list = RTE_MIN(max_segs_per_type,
-				(unsigned int)RTE_MAX_MEMSEG_PER_LIST);
-
-		/* calculate how much memory we can have per segment list */
-		max_mem_per_list = RTE_MIN(max_segs_per_list * pagesz,
-				(uint64_t)RTE_MAX_MEM_MB_PER_LIST << 20);
-
-		/* calculate how many segments each segment list will have */
-		n_segs = RTE_MIN(max_segs_per_list, max_mem_per_list / pagesz);
-
-		/* calculate how many segment lists we can have */
-		n_seglists = RTE_MIN(max_segs_per_type / n_segs,
-				max_mem_per_type / max_mem_per_list);
-
-		/* limit number of segment lists according to our maximum */
-		n_seglists = RTE_MIN(n_seglists, max_seglists_per_type);
-
-		RTE_LOG(DEBUG, EAL, "Creating %i segment lists: "
-				"n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n",
-			n_seglists, n_segs, socket_id, pagesz);
-
-		/* create all segment lists */
-		for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) {
-			if (msl_idx >= RTE_MAX_MEMSEG_LISTS) {
-				RTE_LOG(ERR, EAL,
-					"No more space in memseg lists, please increase %s\n",
-					RTE_STR(CONFIG_RTE_MAX_MEMSEG_LISTS));
-				goto out;
-			}
-			msl = &mcfg->memsegs[msl_idx++];
-
-			if (memseg_list_init(msl, pagesz, n_segs,
-					socket_id, cur_seglist))
-				goto out;
-
-			if (memseg_list_alloc(msl)) {
-				RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n");
-				goto out;
-			}
-		}
-	}
-	/* we're successful */
-	ret = 0;
-out:
-	free(memtypes);
-	return ret;
+	return eal_dynmem_memseg_lists_init();
 }
 
 static int
@@ -2364,7 +1863,7 @@ memseg_secondary_init(void)
 		}
 
 		/* preallocate VA space */
-		if (memseg_list_alloc(msl)) {
+		if (eal_memseg_list_alloc(msl, 0)) {
 			RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n");
 			return -1;
 		}
-- 
2.25.4

