* [PATCH 1/2] test/memzone: add test for memzone count in eal mem config
@ 2018-01-26 17:40 Anatoly Burakov
  2018-01-26 17:40 ` [PATCH 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-01-26 17:40 UTC (permalink / raw)
  To: dev

Ensure that memzone count in eal mem config is incremented and
decremented whenever memzones are allocated and freed.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 test/test/test_memzone.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index f6c9b56..00d340f 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -841,6 +841,9 @@ test_memzone_basic(void)
 	const struct rte_memzone *memzone3;
 	const struct rte_memzone *memzone4;
 	const struct rte_memzone *mz;
+	int memzone_cnt_after, memzone_cnt_expected;
+	int memzone_cnt_before =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
 	memzone1 = rte_memzone_reserve("testzone1", 100,
 				SOCKET_ID_ANY, 0);
@@ -858,6 +861,18 @@ test_memzone_basic(void)
 	if (memzone1 == NULL || memzone2 == NULL || memzone4 == NULL)
 		return -1;
 
+	/* check how many memzones we are expecting */
+	memzone_cnt_expected = memzone_cnt_before +
+			(memzone1 != NULL) + (memzone2 != NULL) +
+			(memzone3 != NULL) + (memzone4 != NULL);
+
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+
+	if (memzone_cnt_after != memzone_cnt_expected)
+		return -1;
+
+
 	rte_memzone_dump(stdout);
 
 	/* check cache-line alignments */
@@ -930,6 +945,11 @@ test_memzone_basic(void)
 		return -1;
 	}
 
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+	if (memzone_cnt_after != memzone_cnt_before)
+		return -1;
+
 	return 0;
 }
 
-- 
2.7.4


* [PATCH 2/2] test/memzone: handle previously allocated memzones
  2018-01-26 17:40 [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Anatoly Burakov
@ 2018-01-26 17:40 ` Anatoly Burakov
  2018-01-27 14:46   ` Radoslaw Biernacki
  2018-01-31  7:51   ` Phil Yang
  2018-01-27 14:53 ` [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Radoslaw Biernacki
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-01-26 17:40 UTC (permalink / raw)
  To: dev; +Cc: radoslaw.biernacki, stable

Currently, memzone autotest expects there to be no memzones
present by the time the test is run. Some hardware drivers
will allocate memzones for internal use during initialization,
resulting in tests failing due to unexpected memzones being
allocated before the test was run.

Fix this by making callback increment a counter instead. This
also doubles as a test for correct operation of memzone_walk().

Fixes: 71330483a193 ("test/memzone: fix memory leak")
Cc: radoslaw.biernacki@linaro.org
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 test/test/test_memzone.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index 00d340f..5428b35 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -953,16 +953,19 @@ test_memzone_basic(void)
 	return 0;
 }
 
-static int memzone_walk_called;
+static int memzone_walk_cnt;
 static void memzone_walk_clb(const struct rte_memzone *mz __rte_unused,
 			     void *arg __rte_unused)
 {
-	memzone_walk_called = 1;
+	memzone_walk_cnt++;
 }
 
 static int
 test_memzone(void)
 {
+	/* take note of how many memzones were allocated before running */
+	int memzone_cnt = rte_eal_get_configuration()->mem_config->memzone_cnt;
+
 	printf("test basic memzone API\n");
 	if (test_memzone_basic() < 0)
 		return -1;
@@ -1000,8 +1003,9 @@ test_memzone(void)
 		return -1;
 
 	printf("check memzone cleanup\n");
+	memzone_walk_cnt = 0;
 	rte_memzone_walk(memzone_walk_clb, NULL);
-	if (memzone_walk_called) {
+	if (memzone_walk_cnt != memzone_cnt) {
 		printf("there are some memzones left after test\n");
 		rte_memzone_dump(stdout);
 		return -1;
-- 
2.7.4


* Re: [PATCH 2/2] test/memzone: handle previously allocated memzones
  2018-01-26 17:40 ` [PATCH 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
@ 2018-01-27 14:46   ` Radoslaw Biernacki
  2018-01-31  7:51   ` Phil Yang
  1 sibling, 0 replies; 17+ messages in thread
From: Radoslaw Biernacki @ 2018-01-27 14:46 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev, stable

Thanks, looks OK to me.

Reviewed-by: Radoslaw Biernacki <radoslaw.biernacki@linaro.com>

On 26 January 2018 at 18:40, Anatoly Burakov <anatoly.burakov@intel.com>
wrote:

> Currently, memzone autotest expects there to be no memzones
> present by the time the test is run. Some hardware drivers
> will allocate memzones for internal use during initialization,
> resulting in tests failing due to unexpected memzones being
> allocated before the test was run.
>
> Fix this by making callback increment a counter instead. This
> also doubles as a test for correct operation of memzone_walk().
>
> Fixes: 71330483a193 ("test/memzone: fix memory leak")
> Cc: radoslaw.biernacki@linaro.org
> Cc: stable@dpdk.org
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>  test/test/test_memzone.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
> index 00d340f..5428b35 100644
> --- a/test/test/test_memzone.c
> +++ b/test/test/test_memzone.c
> @@ -953,16 +953,19 @@ test_memzone_basic(void)
>         return 0;
>  }
>
> -static int memzone_walk_called;
> +static int memzone_walk_cnt;
>  static void memzone_walk_clb(const struct rte_memzone *mz __rte_unused,
>                              void *arg __rte_unused)
>  {
> -       memzone_walk_called = 1;
> +       memzone_walk_cnt++;
>  }
>
>  static int
>  test_memzone(void)
>  {
> +       /* take note of how many memzones were allocated before running */
> +       int memzone_cnt = rte_eal_get_configuration()->
> mem_config->memzone_cnt;
> +
>         printf("test basic memzone API\n");
>         if (test_memzone_basic() < 0)
>                 return -1;
> @@ -1000,8 +1003,9 @@ test_memzone(void)
>                 return -1;
>
>         printf("check memzone cleanup\n");
> +       memzone_walk_cnt = 0;
>         rte_memzone_walk(memzone_walk_clb, NULL);
> -       if (memzone_walk_called) {
> +       if (memzone_walk_cnt != memzone_cnt) {
>                 printf("there are some memzones left after test\n");
>                 rte_memzone_dump(stdout);
>                 return -1;
> --
> 2.7.4
>


* Re: [PATCH 1/2] test/memzone: add test for memzone count in eal mem config
  2018-01-26 17:40 [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Anatoly Burakov
  2018-01-26 17:40 ` [PATCH 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
@ 2018-01-27 14:53 ` Radoslaw Biernacki
  2018-01-29  9:40   ` Burakov, Anatoly
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
  3 siblings, 1 reply; 17+ messages in thread
From: Radoslaw Biernacki @ 2018-01-27 14:53 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev

Looks OK.

The following note is an aside from the patch.
It might be beneficial (in some rare cases) to add bailout recovery with
goto's in test_memzone_basic().
In case one of the rte_memzone_reserve() calls fails, we should not return -1
directly, but instead goto the section below where we call rte_memzone_free().
This way we would free only the memzones that were actually allocated and
prevent leaking those memzones into other tests.
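
A minimal sketch of what I mean (hypothetical helper name, untested, not part
of this patch; only the error-handling structure matters):

static int
test_memzone_basic_with_cleanup(void)
{
	const struct rte_memzone *memzone1 = NULL;
	const struct rte_memzone *memzone2 = NULL;
	int ret = -1;

	memzone1 = rte_memzone_reserve("testzone1", 100, SOCKET_ID_ANY, 0);
	memzone2 = rte_memzone_reserve("testzone2", 1000, 0, 0);
	if (memzone1 == NULL || memzone2 == NULL)
		goto cleanup; /* bail out instead of returning -1 */

	/* ... all remaining checks also do "goto cleanup" on failure ... */

	ret = 0;
cleanup:
	/* free only what was actually reserved, so nothing leaks to other tests */
	if (memzone2 != NULL)
		rte_memzone_free(memzone2);
	if (memzone1 != NULL)
		rte_memzone_free(memzone1);
	return ret;
}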

Reviewed-by: Radoslaw Biernacki <radoslaw.biernacki@linaro.com>

On 26 January 2018 at 18:40, Anatoly Burakov <anatoly.burakov@intel.com>
wrote:

> Ensure that memzone count in eal mem config is incremented and
> decremented whenever memzones are allocated and freed.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>  test/test/test_memzone.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
> index f6c9b56..00d340f 100644
> --- a/test/test/test_memzone.c
> +++ b/test/test/test_memzone.c
> @@ -841,6 +841,9 @@ test_memzone_basic(void)
>         const struct rte_memzone *memzone3;
>         const struct rte_memzone *memzone4;
>         const struct rte_memzone *mz;
> +       int memzone_cnt_after, memzone_cnt_expected;
> +       int memzone_cnt_before =
> +                       rte_eal_get_configuration()->
> mem_config->memzone_cnt;
>
>         memzone1 = rte_memzone_reserve("testzone1", 100,
>                                 SOCKET_ID_ANY, 0);
> @@ -858,6 +861,18 @@ test_memzone_basic(void)
>         if (memzone1 == NULL || memzone2 == NULL || memzone4 == NULL)
>                 return -1;
>
> +       /* check how many memzones we are expecting */
> +       memzone_cnt_expected = memzone_cnt_before +
> +                       (memzone1 != NULL) + (memzone2 != NULL) +
> +                       (memzone3 != NULL) + (memzone4 != NULL);
> +
> +       memzone_cnt_after =
> +                       rte_eal_get_configuration()->
> mem_config->memzone_cnt;
> +
> +       if (memzone_cnt_after != memzone_cnt_expected)
> +               return -1;
> +
> +
>         rte_memzone_dump(stdout);
>
>         /* check cache-line alignments */
> @@ -930,6 +945,11 @@ test_memzone_basic(void)
>                 return -1;
>         }
>
> +       memzone_cnt_after =
> +                       rte_eal_get_configuration()->
> mem_config->memzone_cnt;
> +       if (memzone_cnt_after != memzone_cnt_before)
> +               return -1;
> +
>         return 0;
>  }
>
> --
> 2.7.4
>


* Re: [PATCH 1/2] test/memzone: add test for memzone count in eal mem config
  2018-01-27 14:53 ` [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Radoslaw Biernacki
@ 2018-01-29  9:40   ` Burakov, Anatoly
  0 siblings, 0 replies; 17+ messages in thread
From: Burakov, Anatoly @ 2018-01-29  9:40 UTC (permalink / raw)
  To: Radoslaw Biernacki; +Cc: dev

On 27-Jan-18 2:53 PM, Radoslaw Biernacki wrote:
> Looks OK.
> 
> The following note is an aside from the patch.
> It might be beneficial (in some rare cases) to add bailout recovery with
> goto's in test_memzone_basic().
> In case one of the rte_memzone_reserve() calls fails, we should not return -1
> directly, but instead goto the section below where we call rte_memzone_free().
> This way we would free only the memzones that were actually allocated and
> prevent leaking those memzones into other tests.

Thanks, and yep, it's on my todo list :) didn't get around to it yet.

> 
> Reviewed-by: Radoslaw Biernacki <radoslaw.biernacki@linaro.com>
> 


-- 
Thanks,
Anatoly


* Re: [PATCH 2/2] test/memzone: handle previously allocated memzones
  2018-01-26 17:40 ` [PATCH 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
  2018-01-27 14:46   ` Radoslaw Biernacki
@ 2018-01-31  7:51   ` Phil Yang
  2018-01-31 10:05     ` Burakov, Anatoly
  1 sibling, 1 reply; 17+ messages in thread
From: Phil Yang @ 2018-01-31  7:51 UTC (permalink / raw)
  To: Anatoly Burakov, dev; +Cc: radoslaw.biernacki, stable, nd

Hi Anatoly,

I think your fix is elegant, however you can't guarantee that no dirty memzones remain after the memzone autotest.
What if some pre-existing memzone is released during the test while a dirty memzone is left behind? The counter cannot detect this state.
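
For example, with hypothetical zone names (just a sketch of the scenario, not
code from either patch):

	/* Suppose the test reserves a zone and, by mistake, never frees it: */
	rte_memzone_reserve("testzone_leaked", 100, SOCKET_ID_ANY, 0);
	/* memzone_cnt: N -> N + 1 */

	/* ...and, also by mistake, frees a pre-existing driver zone: */
	rte_memzone_free(rte_memzone_lookup("drv_zone"));
	/* memzone_cnt: N + 1 -> N */

	/* The final count equals N again, so a count-only check passes even
	 * though "testzone_leaked" is still allocated. */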

My fix only cares about the memzones used in the memzone autotest. It is rough, but it seems more reliable. 😊

Thanks,
Phil Yang

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Anatoly Burakov
> Sent: Saturday, January 27, 2018 1:41 AM
> To: dev@dpdk.org
> Cc: radoslaw.biernacki@linaro.org; stable@dpdk.org
> Subject: [dpdk-dev] [PATCH 2/2] test/memzone: handle previously allocated
> memzones
> 
> Currently, memzone autotest expects there to be no memzones present by the
> time the test is run. Some hardware drivers will allocate memzones for internal
> use during initialization, resulting in tests failing due to unexpected memzones
> being allocated before the test was run.
> 
> Fix this by making callback increment a counter instead. This also doubles as a
> test for correct operation of memzone_walk().
> 
> Fixes: 71330483a193 ("test/memzone: fix memory leak")
> Cc: radoslaw.biernacki@linaro.org
> Cc: stable@dpdk.org
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>  test/test/test_memzone.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c index
> 00d340f..5428b35 100644
> --- a/test/test/test_memzone.c
> +++ b/test/test/test_memzone.c
> @@ -953,16 +953,19 @@ test_memzone_basic(void)
>  	return 0;
>  }
> 
> -static int memzone_walk_called;
> +static int memzone_walk_cnt;
>  static void memzone_walk_clb(const struct rte_memzone *mz __rte_unused,
>  			     void *arg __rte_unused)
>  {
> -	memzone_walk_called = 1;
> +	memzone_walk_cnt++;
>  }
> 
>  static int
>  test_memzone(void)
>  {
> +	/* take note of how many memzones were allocated before running */
> +	int memzone_cnt =
> +rte_eal_get_configuration()->mem_config->memzone_cnt;
> +
>  	printf("test basic memzone API\n");
>  	if (test_memzone_basic() < 0)
>  		return -1;
> @@ -1000,8 +1003,9 @@ test_memzone(void)
>  		return -1;
> 
>  	printf("check memzone cleanup\n");
> +	memzone_walk_cnt = 0;
>  	rte_memzone_walk(memzone_walk_clb, NULL);
> -	if (memzone_walk_called) {
> +	if (memzone_walk_cnt != memzone_cnt) {
>  		printf("there are some memzones left after test\n");
>  		rte_memzone_dump(stdout);
>  		return -1;
> --
> 2.7.4


* Re: [PATCH 2/2] test/memzone: handle previously allocated memzones
  2018-01-31  7:51   ` Phil Yang
@ 2018-01-31 10:05     ` Burakov, Anatoly
  2018-01-31 10:08       ` Phil Yang
  0 siblings, 1 reply; 17+ messages in thread
From: Burakov, Anatoly @ 2018-01-31 10:05 UTC (permalink / raw)
  To: Phil Yang, dev; +Cc: radoslaw.biernacki, stable, nd

On 31-Jan-18 7:51 AM, Phil Yang wrote:
> Hi Anatoly,
> 
> I think your fix is elegant, however you can't guarantee that no dirty memzones remain after the memzone autotest.
> What if some pre-existing memzone is released during the test while a dirty memzone is left behind? The counter cannot detect this state.
> 
> My fix only cares about the memzones used in the memzone autotest. It is rough, but it seems more reliable. 😊
> 
> Thanks,
> Phil Yang

We could combine the approaches. That way, we both ensure that no
memzones that should've been freed were left behind, and that the total
number of memzones didn't change (i.e. we didn't allocate/free any
memzones we weren't supposed to allocate/free).

As a side note, I think making a #define with a memzone prefix in your
patch will work better and will be less copy-paste-error-prone in the
long run.
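
Roughly what I have in mind, as a sketch only (the actual code will be in the
v2; needs <string.h> for strncmp/strlen):

#define TEST_MEMZONE_NAME(suffix) "MZ_TEST_" suffix

static int memzone_walk_cnt;	/* every memzone still allocated */
static int test_memzones_left;	/* memzones created by this test */

static void
memzone_walk_clb(const struct rte_memzone *mz, void *arg __rte_unused)
{
	memzone_walk_cnt++;
	if (strncmp(TEST_MEMZONE_NAME(""), mz->name,
			strlen(TEST_MEMZONE_NAME(""))) == 0)
		test_memzones_left++;
}

/* in test_memzone(): remember the count before running the sub-tests... */
int memzone_cnt = rte_eal_get_configuration()->mem_config->memzone_cnt;

/* ...and check both counters after they have run */
memzone_walk_cnt = 0;
test_memzones_left = 0;
rte_memzone_walk(memzone_walk_clb, NULL);
if (memzone_walk_cnt != memzone_cnt || test_memzones_left > 0)
	return -1; /* a test memzone leaked, or the total count changed */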

I will prepare a v2 combining both approaches. Is that OK with you?

> 
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Anatoly Burakov
>> Sent: Saturday, January 27, 2018 1:41 AM
>> To: dev@dpdk.org
>> Cc: radoslaw.biernacki@linaro.org; stable@dpdk.org
>> Subject: [dpdk-dev] [PATCH 2/2] test/memzone: handle previously allocated
>> memzones
>>
>> Currently, memzone autotest expects there to be no memzones present by the
>> time the test is run. Some hardware drivers will allocate memzones for internal
>> use during initialization, resulting in tests failing due to unexpected memzones
>> being allocated before the test was run.
>>
>> Fix this by making callback increment a counter instead. This also doubles as a
>> test for correct operation of memzone_walk().
>>
>> Fixes: 71330483a193 ("test/memzone: fix memory leak")
>> Cc: radoslaw.biernacki@linaro.org
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>


-- 
Thanks,
Anatoly


* Re: [PATCH 2/2] test/memzone: handle previously allocated memzones
  2018-01-31 10:05     ` Burakov, Anatoly
@ 2018-01-31 10:08       ` Phil Yang
  0 siblings, 0 replies; 17+ messages in thread
From: Phil Yang @ 2018-01-31 10:08 UTC (permalink / raw)
  To: Burakov, Anatoly, dev; +Cc: radoslaw.biernacki, stable, nd

That is OK for me. Thanks for your comments.

Thanks,
Phil Yang

> -----Original Message-----
> From: Burakov, Anatoly [mailto:anatoly.burakov@intel.com]
> Sent: Wednesday, January 31, 2018 6:05 PM
> To: Phil Yang <Phil.Yang@arm.com>; dev@dpdk.org
> Cc: radoslaw.biernacki@linaro.org; stable@dpdk.org; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] [PATCH 2/2] test/memzone: handle previously allocated
> memzones
> 
> On 31-Jan-18 7:51 AM, Phil Yang wrote:
> > Hi Anatoly,
> >
> > I think your fix is elegant, however you can't guarantee that no dirty
> > memzones remain after the memzone autotest.
> > What if some pre-existing memzone is released during the test while a dirty
> > memzone is left behind? The counter cannot detect this state.
> >
> > My fix only cares about the memzones used in the memzone autotest. It is
> > rough, but it seems more reliable. 😊
> >
> > Thanks,
> > Phil Yang
> 
> We could combine the approaches. That way, we both ensure that no
> memzones that should've been freed were left behind, and that the total
> number of memzones didn't change (i.e. we didn't allocate/free any memzones
> we weren't supposed to allocate/free).
> 
> As a side note, I think making a #define with a memzone prefix in your patch
> will work better and will be less copy-paste-error-prone in the long run.
> 
> I will prepare a v2 combining both approaches. Is that OK with you?
> 
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Anatoly Burakov
> >> Sent: Saturday, January 27, 2018 1:41 AM
> >> To: dev@dpdk.org
> >> Cc: radoslaw.biernacki@linaro.org; stable@dpdk.org
> >> Subject: [dpdk-dev] [PATCH 2/2] test/memzone: handle previously
> >> allocated memzones
> >>
> >> Currently, memzone autotest expects there to be no memzones present
> >> by the time the test is run. Some hardware drivers will allocate
> >> memzones for internal use during initialization, resulting in tests
> >> failing due to unexpected memzones being allocated before the test was run.
> >>
> >> Fix this by making callback increment a counter instead. This also
> >> doubles as a test for correct operation of memzone_walk().
> >>
> >> Fixes: 71330483a193 ("test/memzone: fix memory leak")
> >> Cc: radoslaw.biernacki@linaro.org
> >> Cc: stable@dpdk.org
> >>
> >> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> 
> --
> Thanks,
> Anatoly


* [PATCH v2 1/2] test/memzone: add test for memzone count in eal mem config
  2018-01-26 17:40 [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Anatoly Burakov
  2018-01-26 17:40 ` [PATCH 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
  2018-01-27 14:53 ` [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Radoslaw Biernacki
@ 2018-01-31 15:29 ` Anatoly Burakov
  2018-02-01  0:12   ` Thomas Monjalon
                     ` (2 more replies)
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
  3 siblings, 3 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-01-31 15:29 UTC (permalink / raw)
  To: dev

Ensure that memzone count in eal mem config is incremented and
decremented whenever memzones are allocated and freed.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 test/test/test_memzone.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index f6c9b56..00d340f 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -841,6 +841,9 @@ test_memzone_basic(void)
 	const struct rte_memzone *memzone3;
 	const struct rte_memzone *memzone4;
 	const struct rte_memzone *mz;
+	int memzone_cnt_after, memzone_cnt_expected;
+	int memzone_cnt_before =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
 	memzone1 = rte_memzone_reserve("testzone1", 100,
 				SOCKET_ID_ANY, 0);
@@ -858,6 +861,18 @@ test_memzone_basic(void)
 	if (memzone1 == NULL || memzone2 == NULL || memzone4 == NULL)
 		return -1;
 
+	/* check how many memzones we are expecting */
+	memzone_cnt_expected = memzone_cnt_before +
+			(memzone1 != NULL) + (memzone2 != NULL) +
+			(memzone3 != NULL) + (memzone4 != NULL);
+
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+
+	if (memzone_cnt_after != memzone_cnt_expected)
+		return -1;
+
+
 	rte_memzone_dump(stdout);
 
 	/* check cache-line alignments */
@@ -930,6 +945,11 @@ test_memzone_basic(void)
 		return -1;
 	}
 
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+	if (memzone_cnt_after != memzone_cnt_before)
+		return -1;
+
 	return 0;
 }
 
-- 
2.7.4


* [PATCH v2 2/2] test/memzone: handle previously allocated memzones
  2018-01-26 17:40 [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Anatoly Burakov
                   ` (2 preceding siblings ...)
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
@ 2018-01-31 15:29 ` Anatoly Burakov
  3 siblings, 0 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-01-31 15:29 UTC (permalink / raw)
  To: dev; +Cc: radoslaw.biernacki, stable

Currently, memzone autotest expects there to be no memzones
present by the time the test is run. Some hardware drivers
will allocate memzones for internal use during initialization,
resulting in tests failing due to unexpected memzones being
allocated before the test was run.

Fix this by making sure all memzones allocated by this test
have a common prefix, and making the callback increment a counter
on encountering memzones with this prefix. Also, separately
increment another counter that counts the total number of
memzones left after the test, and compare it to the previously
stored number of memzones, to ensure that we didn't accidentally
allocate/free any memzones we weren't supposed to. This
also doubles as a test for correct operation of memzone_walk().

Suggested-by: Phil Yang <Phil.Yang@arm.com>

Fixes: 71330483a193 ("test/memzone: fix memory leak")
Cc: radoslaw.biernacki@linaro.org
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 test/test/test_memzone.c | 225 +++++++++++++++++++++++++++++------------------
 1 file changed, 140 insertions(+), 85 deletions(-)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index 00d340f..8ece1ac 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -4,6 +4,7 @@
 
 #include <stdio.h>
 #include <stdint.h>
+#include <string.h>
 #include <inttypes.h>
 #include <sys/queue.h>
 
@@ -47,6 +48,8 @@
  * - Check flags for specific huge page size reservation
  */
 
+#define TEST_MEMZONE_NAME(suffix) "MZ_TEST_" suffix
+
 /* Test if memory overlaps: return 1 if true, or 0 if false. */
 static int
 is_memory_overlap(rte_iova_t ptr1, size_t len1, rte_iova_t ptr2, size_t len2)
@@ -63,14 +66,14 @@ test_memzone_invalid_alignment(void)
 {
 	const struct rte_memzone * mz;
 
-	mz = rte_memzone_lookup("invalid_alignment");
+	mz = rte_memzone_lookup(TEST_MEMZONE_NAME("invalid_alignment"));
 	if (mz != NULL) {
 		printf("Zone with invalid alignment has been reserved\n");
 		return -1;
 	}
 
-	mz = rte_memzone_reserve_aligned("invalid_alignment", 100,
-			SOCKET_ID_ANY, 0, 100);
+	mz = rte_memzone_reserve_aligned(TEST_MEMZONE_NAME("invalid_alignment"),
+					 100, SOCKET_ID_ANY, 0, 100);
 	if (mz != NULL) {
 		printf("Zone with invalid alignment has been reserved\n");
 		return -1;
@@ -83,14 +86,16 @@ test_memzone_reserving_zone_size_bigger_than_the_maximum(void)
 {
 	const struct rte_memzone * mz;
 
-	mz = rte_memzone_lookup("zone_size_bigger_than_the_maximum");
+	mz = rte_memzone_lookup(
+			TEST_MEMZONE_NAME("zone_size_bigger_than_the_maximum"));
 	if (mz != NULL) {
 		printf("zone_size_bigger_than_the_maximum has been reserved\n");
 		return -1;
 	}
 
-	mz = rte_memzone_reserve("zone_size_bigger_than_the_maximum", (size_t)-1,
-			SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve(
+			TEST_MEMZONE_NAME("zone_size_bigger_than_the_maximum"),
+			(size_t)-1, SOCKET_ID_ANY, 0);
 	if (mz != NULL) {
 		printf("It is impossible to reserve such big a memzone\n");
 		return -1;
@@ -137,8 +142,8 @@ test_memzone_reserve_flags(void)
 	 * available page size (i.e 1GB ) when 2MB pages are unavailable.
 	 */
 	if (hugepage_2MB_avail) {
-		mz = rte_memzone_reserve("flag_zone_2M", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_2MB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_2M"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_2MB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 2MB\n");
 			return -1;
@@ -152,7 +157,8 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+				size, SOCKET_ID_ANY,
 				RTE_MEMZONE_2MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 2MB\n");
@@ -171,7 +177,9 @@ test_memzone_reserve_flags(void)
 		 * HINT flag is indicated
 		 */
 		if (!hugepage_1GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_1G_HINT", size, SOCKET_ID_ANY,
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_1G_HINT"),
+					size, SOCKET_ID_ANY,
 					RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 1GB & HINT\n");
@@ -186,8 +194,9 @@ test_memzone_reserve_flags(void)
 				return -1;
 			}
 
-			mz = rte_memzone_reserve("flag_zone_1G", size, SOCKET_ID_ANY,
-					RTE_MEMZONE_1GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_1G"), size,
+					SOCKET_ID_ANY, RTE_MEMZONE_1GB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 1GB\n");
 				return -1;
@@ -197,8 +206,8 @@ test_memzone_reserve_flags(void)
 
 	/*As with 2MB tests above for 1GB huge page requests*/
 	if (hugepage_1GB_avail) {
-		mz = rte_memzone_reserve("flag_zone_1G", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_1GB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_1G"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_1GB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 1GB\n");
 			return -1;
@@ -212,7 +221,8 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_1G_HINT", size, SOCKET_ID_ANY,
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_1G_HINT"),
+				size, SOCKET_ID_ANY,
 				RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 1GB\n");
@@ -231,7 +241,9 @@ test_memzone_reserve_flags(void)
 		 * HINT flag is indicated
 		 */
 		if (!hugepage_2MB_avail) {
-			mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+					size, SOCKET_ID_ANY,
 					RTE_MEMZONE_2MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL){
 				printf("MEMZONE FLAG 2MB & HINT\n");
@@ -245,8 +257,9 @@ test_memzone_reserve_flags(void)
 				printf("Fail memzone free\n");
 				return -1;
 			}
-			mz = rte_memzone_reserve("flag_zone_2M", size, SOCKET_ID_ANY,
-					RTE_MEMZONE_2MB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M"), size,
+					SOCKET_ID_ANY, RTE_MEMZONE_2MB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 2MB\n");
 				return -1;
@@ -254,8 +267,10 @@ test_memzone_reserve_flags(void)
 		}
 
 		if (hugepage_2MB_avail && hugepage_1GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
-								RTE_MEMZONE_2MB|RTE_MEMZONE_1GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_2MB|RTE_MEMZONE_1GB);
 			if (mz == NULL) {
 				printf("BOTH SIZES SET\n");
 				return -1;
@@ -279,8 +294,8 @@ test_memzone_reserve_flags(void)
 	 * page size (i.e 16GB ) when 16MB pages are unavailable.
 	 */
 	if (hugepage_16MB_avail) {
-		mz = rte_memzone_reserve("flag_zone_16M", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_16M"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_16MB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16MB\n");
 			return -1;
@@ -294,8 +309,10 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-		SOCKET_ID_ANY, RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
+		mz = rte_memzone_reserve(
+				TEST_MEMZONE_NAME("flag_zone_16M_HINT"), size,
+				SOCKET_ID_ANY,
+				RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16MB\n");
 			return -1;
@@ -313,9 +330,11 @@ test_memzone_reserve_flags(void)
 		 * unless HINT flag is indicated
 		 */
 		if (!hugepage_16GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16G_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16G_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16GB |
+					RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 16GB & HINT\n");
 				return -1;
@@ -329,8 +348,10 @@ test_memzone_reserve_flags(void)
 				return -1;
 			}
 
-			mz = rte_memzone_reserve("flag_zone_16G", size,
-				SOCKET_ID_ANY, RTE_MEMZONE_16GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16G"),
+					size,
+					SOCKET_ID_ANY, RTE_MEMZONE_16GB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 16GB\n");
 				return -1;
@@ -339,8 +360,8 @@ test_memzone_reserve_flags(void)
 	}
 	/*As with 16MB tests above for 16GB huge page requests*/
 	if (hugepage_16GB_avail) {
-		mz = rte_memzone_reserve("flag_zone_16G", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_16GB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_16G"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_16GB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16GB\n");
 			return -1;
@@ -354,8 +375,10 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_16G_HINT", size,
-		SOCKET_ID_ANY, RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
+		mz = rte_memzone_reserve(
+				TEST_MEMZONE_NAME("flag_zone_16G_HINT"), size,
+				SOCKET_ID_ANY,
+				RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16GB\n");
 			return -1;
@@ -373,9 +396,11 @@ test_memzone_reserve_flags(void)
 		 * unless HINT flag is indicated
 		 */
 		if (!hugepage_16MB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16MB |
+					RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 16MB & HINT\n");
 				return -1;
@@ -388,8 +413,9 @@ test_memzone_reserve_flags(void)
 				printf("Fail memzone free\n");
 				return -1;
 			}
-			mz = rte_memzone_reserve("flag_zone_16M", size,
-				SOCKET_ID_ANY, RTE_MEMZONE_16MB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M"),
+					size, SOCKET_ID_ANY, RTE_MEMZONE_16MB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 16MB\n");
 				return -1;
@@ -397,9 +423,10 @@ test_memzone_reserve_flags(void)
 		}
 
 		if (hugepage_16MB_avail && hugepage_16GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB|RTE_MEMZONE_16GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16MB|RTE_MEMZONE_16GB);
 			if (mz == NULL) {
 				printf("BOTH SIZES SET\n");
 				return -1;
@@ -455,7 +482,8 @@ test_memzone_reserve_max(void)
 		return 0;
 	}
 
-	mz = rte_memzone_reserve("max_zone", 0, SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("max_zone"), 0,
+			SOCKET_ID_ANY, 0);
 	if (mz == NULL){
 		printf("Failed to reserve a big chunk of memory - %s\n",
 				rte_strerror(rte_errno));
@@ -497,8 +525,8 @@ test_memzone_reserve_max_aligned(void)
 		return 0;
 	}
 
-	mz = rte_memzone_reserve_aligned("max_zone_aligned", 0,
-			SOCKET_ID_ANY, 0, align);
+	mz = rte_memzone_reserve_aligned(TEST_MEMZONE_NAME("max_zone_aligned"),
+			0, SOCKET_ID_ANY, 0, align);
 	if (mz == NULL){
 		printf("Failed to reserve a big chunk of memory - %s\n",
 				rte_strerror(rte_errno));
@@ -535,24 +563,29 @@ test_memzone_aligned(void)
 	const struct rte_memzone *memzone_aligned_1024;
 
 	/* memzone that should automatically be adjusted to align on 64 bytes */
-	memzone_aligned_32 = rte_memzone_reserve_aligned("aligned_32", 100,
-				SOCKET_ID_ANY, 0, 32);
+	memzone_aligned_32 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_32"), 100, SOCKET_ID_ANY, 0,
+			32);
 
 	/* memzone that is supposed to be aligned on a 128 byte boundary */
-	memzone_aligned_128 = rte_memzone_reserve_aligned("aligned_128", 100,
-				SOCKET_ID_ANY, 0, 128);
+	memzone_aligned_128 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_128"), 100, SOCKET_ID_ANY, 0,
+			128);
 
 	/* memzone that is supposed to be aligned on a 256 byte boundary */
-	memzone_aligned_256 = rte_memzone_reserve_aligned("aligned_256", 100,
-				SOCKET_ID_ANY, 0, 256);
+	memzone_aligned_256 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_256"), 100, SOCKET_ID_ANY, 0,
+			256);
 
 	/* memzone that is supposed to be aligned on a 512 byte boundary */
-	memzone_aligned_512 = rte_memzone_reserve_aligned("aligned_512", 100,
-				SOCKET_ID_ANY, 0, 512);
+	memzone_aligned_512 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_512"), 100, SOCKET_ID_ANY, 0,
+			512);
 
 	/* memzone that is supposed to be aligned on a 1024 byte boundary */
-	memzone_aligned_1024 = rte_memzone_reserve_aligned("aligned_1024", 100,
-				SOCKET_ID_ANY, 0, 1024);
+	memzone_aligned_1024 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_1024"), 100, SOCKET_ID_ANY,
+			0, 1024);
 
 	printf("check alignments and lengths\n");
 	if (memzone_aligned_32 == NULL) {
@@ -721,37 +754,46 @@ static int
 test_memzone_bounded(void)
 {
 	const struct rte_memzone *memzone_err;
-	const char *name;
 	int rc;
 
 	/* should fail as boundary is not power of two */
-	name = "bounded_error_31";
-	if ((memzone_err = rte_memzone_reserve_bounded(name,
-			100, SOCKET_ID_ANY, 0, 32, UINT32_MAX)) != NULL) {
+	memzone_err = rte_memzone_reserve_bounded(
+			TEST_MEMZONE_NAME("bounded_error_31"), 100,
+			SOCKET_ID_ANY, 0, 32, UINT32_MAX);
+	if (memzone_err != NULL) {
 		printf("%s(%s)created a memzone with invalid boundary "
 			"conditions\n", __func__, memzone_err->name);
 		return -1;
 	}
 
 	/* should fail as len is greater then boundary */
-	name = "bounded_error_32";
-	if ((memzone_err = rte_memzone_reserve_bounded(name,
-			100, SOCKET_ID_ANY, 0, 32, 32)) != NULL) {
+	memzone_err = rte_memzone_reserve_bounded(
+			TEST_MEMZONE_NAME("bounded_error_32"), 100,
+			SOCKET_ID_ANY, 0, 32, 32);
+	if (memzone_err != NULL) {
 		printf("%s(%s)created a memzone with invalid boundary "
 			"conditions\n", __func__, memzone_err->name);
 		return -1;
 	}
 
-	if ((rc = check_memzone_bounded("bounded_128", 100, 128, 128)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_128"), 100, 128,
+			128);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_256", 100, 256, 128)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_256"), 100, 256,
+			128);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_1K", 100, 64, 1024)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_1K"), 100, 64,
+			1024);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_1K_MAX", 0, 64, 1024)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_1K_MAX"), 0, 64,
+			1024);
+	if (rc != 0)
 		return rc;
 
 	return 0;
@@ -764,25 +806,28 @@ test_memzone_free(void)
 	int i;
 	char name[20];
 
-	mz[0] = rte_memzone_reserve("tempzone0", 2000, SOCKET_ID_ANY, 0);
-	mz[1] = rte_memzone_reserve("tempzone1", 4000, SOCKET_ID_ANY, 0);
+	mz[0] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone0"), 2000,
+			SOCKET_ID_ANY, 0);
+	mz[1] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone1"), 4000,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[0] > mz[1])
 		return -1;
-	if (!rte_memzone_lookup("tempzone0"))
+	if (!rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone0")))
 		return -1;
-	if (!rte_memzone_lookup("tempzone1"))
+	if (!rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone1")))
 		return -1;
 
 	if (rte_memzone_free(mz[0])) {
 		printf("Fail memzone free - tempzone0\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone0")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone0"))) {
 		printf("Found previously free memzone - tempzone0\n");
 		return -1;
 	}
-	mz[2] = rte_memzone_reserve("tempzone2", 2000, SOCKET_ID_ANY, 0);
+	mz[2] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone2"), 2000,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[2] > mz[1]) {
 		printf("tempzone2 should have gotten the free entry from tempzone0\n");
@@ -792,7 +837,7 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone2\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone2")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone2"))) {
 		printf("Found previously free memzone - tempzone2\n");
 		return -1;
 	}
@@ -800,14 +845,15 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone1\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone1")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone1"))) {
 		printf("Found previously free memzone - tempzone1\n");
 		return -1;
 	}
 
 	i = 0;
 	do {
-		snprintf(name, sizeof(name), "tempzone%u", i);
+		snprintf(name, sizeof(name), TEST_MEMZONE_NAME("tempzone%u"),
+				i);
 		mz[i] = rte_memzone_reserve(name, 1, SOCKET_ID_ANY, 0);
 	} while (mz[i++] != NULL);
 
@@ -815,7 +861,8 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone0\n");
 		return -1;
 	}
-	mz[0] = rte_memzone_reserve("tempzone0new", 0, SOCKET_ID_ANY, 0);
+	mz[0] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone0new"), 0,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[0] == NULL) {
 		printf("Fail to create memzone - tempzone0new - when MAX memzones were "
@@ -845,16 +892,16 @@ test_memzone_basic(void)
 	int memzone_cnt_before =
 			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
-	memzone1 = rte_memzone_reserve("testzone1", 100,
+	memzone1 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
 				SOCKET_ID_ANY, 0);
 
-	memzone2 = rte_memzone_reserve("testzone2", 1000,
+	memzone2 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone2"), 1000,
 				0, 0);
 
-	memzone3 = rte_memzone_reserve("testzone3", 1000,
+	memzone3 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone3"), 1000,
 				1, 0);
 
-	memzone4 = rte_memzone_reserve("testzone4", 1024,
+	memzone4 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone4"), 1024,
 				SOCKET_ID_ANY, 0);
 
 	/* memzone3 may be NULL if we don't have NUMA */
@@ -918,12 +965,12 @@ test_memzone_basic(void)
 		return -1;
 
 	printf("test zone lookup\n");
-	mz = rte_memzone_lookup("testzone1");
+	mz = rte_memzone_lookup(TEST_MEMZONE_NAME("testzone1"));
 	if (mz != memzone1)
 		return -1;
 
 	printf("test duplcate zone name\n");
-	mz = rte_memzone_reserve("testzone1", 100,
+	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
 			SOCKET_ID_ANY, 0);
 	if (mz != NULL)
 		return -1;
@@ -953,16 +1000,22 @@ test_memzone_basic(void)
 	return 0;
 }
 
-static int memzone_walk_called;
-static void memzone_walk_clb(const struct rte_memzone *mz __rte_unused,
+static int test_memzones_left;
+static int memzone_walk_cnt;
+static void memzone_walk_clb(const struct rte_memzone *mz,
 			     void *arg __rte_unused)
 {
-	memzone_walk_called = 1;
+	memzone_walk_cnt++;
+	if (!strncmp(TEST_MEMZONE_NAME(""), mz->name, RTE_MEMZONE_NAMESIZE))
+		test_memzones_left++;
 }
 
 static int
 test_memzone(void)
 {
+	/* take note of how many memzones were allocated before running */
+	int memzone_cnt = rte_eal_get_configuration()->mem_config->memzone_cnt;
+
 	printf("test basic memzone API\n");
 	if (test_memzone_basic() < 0)
 		return -1;
@@ -1000,8 +1053,10 @@ test_memzone(void)
 		return -1;
 
 	printf("check memzone cleanup\n");
+	memzone_walk_cnt = 0;
+	test_memzones_left = 0;
 	rte_memzone_walk(memzone_walk_clb, NULL);
-	if (memzone_walk_called) {
+	if (memzone_walk_cnt != memzone_cnt || test_memzones_left > 0) {
 		printf("there are some memzones left after test\n");
 		rte_memzone_dump(stdout);
 		return -1;
-- 
2.7.4


* Re: [PATCH v2 1/2] test/memzone: add test for memzone count in eal mem config
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
@ 2018-02-01  0:12   ` Thomas Monjalon
  2018-02-01 10:05     ` Burakov, Anatoly
  2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
  2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
  2 siblings, 1 reply; 17+ messages in thread
From: Thomas Monjalon @ 2018-02-01  0:12 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev

31/01/2018 16:29, Anatoly Burakov:
> Ensure that memzone count in eal mem config is incremented and
> decremented whenever memzones are allocated and freed.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>

Please report acks from previous version.


* [PATCH v3 1/2] test/memzone: add test for memzone count in eal mem config
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
  2018-02-01  0:12   ` Thomas Monjalon
@ 2018-02-01 10:02   ` Anatoly Burakov
  2018-02-01 10:14     ` [PATCH v4 " Anatoly Burakov
  2018-02-01 10:14     ` [PATCH v4 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
  2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
  2 siblings, 2 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-02-01 10:02 UTC (permalink / raw)
  To: dev

Ensure that memzone count in eal mem config is incremented and
decremented whenever memzones are allocated and freed.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 test/test/test_memzone.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index f6c9b56..00d340f 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -841,6 +841,9 @@ test_memzone_basic(void)
 	const struct rte_memzone *memzone3;
 	const struct rte_memzone *memzone4;
 	const struct rte_memzone *mz;
+	int memzone_cnt_after, memzone_cnt_expected;
+	int memzone_cnt_before =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
 	memzone1 = rte_memzone_reserve("testzone1", 100,
 				SOCKET_ID_ANY, 0);
@@ -858,6 +861,18 @@ test_memzone_basic(void)
 	if (memzone1 == NULL || memzone2 == NULL || memzone4 == NULL)
 		return -1;
 
+	/* check how many memzones we are expecting */
+	memzone_cnt_expected = memzone_cnt_before +
+			(memzone1 != NULL) + (memzone2 != NULL) +
+			(memzone3 != NULL) + (memzone4 != NULL);
+
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+
+	if (memzone_cnt_after != memzone_cnt_expected)
+		return -1;
+
+
 	rte_memzone_dump(stdout);
 
 	/* check cache-line alignments */
@@ -930,6 +945,11 @@ test_memzone_basic(void)
 		return -1;
 	}
 
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+	if (memzone_cnt_after != memzone_cnt_before)
+		return -1;
+
 	return 0;
 }
 
-- 
2.7.4


* [PATCH v3 2/2] test/memzone: handle previously allocated memzones
  2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
  2018-02-01  0:12   ` Thomas Monjalon
  2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
@ 2018-02-01 10:02   ` Anatoly Burakov
  2 siblings, 0 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-02-01 10:02 UTC (permalink / raw)
  To: dev; +Cc: radoslaw.biernacki, stable, Phil Yang

Currently, memzone autotest expects there to be no memzones
present by the time the test is run. Some hardware drivers
will allocate memzones for internal use during initialization,
resulting in tests failing due to unexpected memzones being
allocated before the test was run.

Fix this by making sure all memzones allocated by this test
have a common prefix, and making the callback increment a counter
on encountering memzones with this prefix. Also, separately
increment another counter that counts the total number of
memzones left after the test, and compare it to the previously
stored number of memzones, to ensure that we didn't accidentally
allocate/free any memzones we weren't supposed to. This
also doubles as a test for correct operation of memzone_walk().

Fixes: 71330483a193 ("test/memzone: fix memory leak")
Cc: radoslaw.biernacki@linaro.org
Cc: stable@dpdk.org

Signed-off-by: Phil Yang <Phil.Yang@arm.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v3: suggested-by was supposed to be a signoff
    
    v2: incorporated Phil Yang's patch to better ensure
        no memzones were left behind by the test

 test/test/test_memzone.c | 225 +++++++++++++++++++++++++++++------------------
 1 file changed, 140 insertions(+), 85 deletions(-)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index 00d340f..8ece1ac 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -4,6 +4,7 @@
 
 #include <stdio.h>
 #include <stdint.h>
+#include <string.h>
 #include <inttypes.h>
 #include <sys/queue.h>
 
@@ -47,6 +48,8 @@
  * - Check flags for specific huge page size reservation
  */
 
+#define TEST_MEMZONE_NAME(suffix) "MZ_TEST_" suffix
+
 /* Test if memory overlaps: return 1 if true, or 0 if false. */
 static int
 is_memory_overlap(rte_iova_t ptr1, size_t len1, rte_iova_t ptr2, size_t len2)
@@ -63,14 +66,14 @@ test_memzone_invalid_alignment(void)
 {
 	const struct rte_memzone * mz;
 
-	mz = rte_memzone_lookup("invalid_alignment");
+	mz = rte_memzone_lookup(TEST_MEMZONE_NAME("invalid_alignment"));
 	if (mz != NULL) {
 		printf("Zone with invalid alignment has been reserved\n");
 		return -1;
 	}
 
-	mz = rte_memzone_reserve_aligned("invalid_alignment", 100,
-			SOCKET_ID_ANY, 0, 100);
+	mz = rte_memzone_reserve_aligned(TEST_MEMZONE_NAME("invalid_alignment"),
+					 100, SOCKET_ID_ANY, 0, 100);
 	if (mz != NULL) {
 		printf("Zone with invalid alignment has been reserved\n");
 		return -1;
@@ -83,14 +86,16 @@ test_memzone_reserving_zone_size_bigger_than_the_maximum(void)
 {
 	const struct rte_memzone * mz;
 
-	mz = rte_memzone_lookup("zone_size_bigger_than_the_maximum");
+	mz = rte_memzone_lookup(
+			TEST_MEMZONE_NAME("zone_size_bigger_than_the_maximum"));
 	if (mz != NULL) {
 		printf("zone_size_bigger_than_the_maximum has been reserved\n");
 		return -1;
 	}
 
-	mz = rte_memzone_reserve("zone_size_bigger_than_the_maximum", (size_t)-1,
-			SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve(
+			TEST_MEMZONE_NAME("zone_size_bigger_than_the_maximum"),
+			(size_t)-1, SOCKET_ID_ANY, 0);
 	if (mz != NULL) {
 		printf("It is impossible to reserve such big a memzone\n");
 		return -1;
@@ -137,8 +142,8 @@ test_memzone_reserve_flags(void)
 	 * available page size (i.e 1GB ) when 2MB pages are unavailable.
 	 */
 	if (hugepage_2MB_avail) {
-		mz = rte_memzone_reserve("flag_zone_2M", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_2MB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_2M"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_2MB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 2MB\n");
 			return -1;
@@ -152,7 +157,8 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+				size, SOCKET_ID_ANY,
 				RTE_MEMZONE_2MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 2MB\n");
@@ -171,7 +177,9 @@ test_memzone_reserve_flags(void)
 		 * HINT flag is indicated
 		 */
 		if (!hugepage_1GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_1G_HINT", size, SOCKET_ID_ANY,
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_1G_HINT"),
+					size, SOCKET_ID_ANY,
 					RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 1GB & HINT\n");
@@ -186,8 +194,9 @@ test_memzone_reserve_flags(void)
 				return -1;
 			}
 
-			mz = rte_memzone_reserve("flag_zone_1G", size, SOCKET_ID_ANY,
-					RTE_MEMZONE_1GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_1G"), size,
+					SOCKET_ID_ANY, RTE_MEMZONE_1GB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 1GB\n");
 				return -1;
@@ -197,8 +206,8 @@ test_memzone_reserve_flags(void)
 
 	/*As with 2MB tests above for 1GB huge page requests*/
 	if (hugepage_1GB_avail) {
-		mz = rte_memzone_reserve("flag_zone_1G", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_1GB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_1G"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_1GB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 1GB\n");
 			return -1;
@@ -212,7 +221,8 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_1G_HINT", size, SOCKET_ID_ANY,
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_1G_HINT"),
+				size, SOCKET_ID_ANY,
 				RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 1GB\n");
@@ -231,7 +241,9 @@ test_memzone_reserve_flags(void)
 		 * HINT flag is indicated
 		 */
 		if (!hugepage_2MB_avail) {
-			mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+					size, SOCKET_ID_ANY,
 					RTE_MEMZONE_2MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL){
 				printf("MEMZONE FLAG 2MB & HINT\n");
@@ -245,8 +257,9 @@ test_memzone_reserve_flags(void)
 				printf("Fail memzone free\n");
 				return -1;
 			}
-			mz = rte_memzone_reserve("flag_zone_2M", size, SOCKET_ID_ANY,
-					RTE_MEMZONE_2MB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M"), size,
+					SOCKET_ID_ANY, RTE_MEMZONE_2MB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 2MB\n");
 				return -1;
@@ -254,8 +267,10 @@ test_memzone_reserve_flags(void)
 		}
 
 		if (hugepage_2MB_avail && hugepage_1GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
-								RTE_MEMZONE_2MB|RTE_MEMZONE_1GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_2MB|RTE_MEMZONE_1GB);
 			if (mz == NULL) {
 				printf("BOTH SIZES SET\n");
 				return -1;
@@ -279,8 +294,8 @@ test_memzone_reserve_flags(void)
 	 * page size (i.e 16GB ) when 16MB pages are unavailable.
 	 */
 	if (hugepage_16MB_avail) {
-		mz = rte_memzone_reserve("flag_zone_16M", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_16M"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_16MB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16MB\n");
 			return -1;
@@ -294,8 +309,10 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-		SOCKET_ID_ANY, RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
+		mz = rte_memzone_reserve(
+				TEST_MEMZONE_NAME("flag_zone_16M_HINT"), size,
+				SOCKET_ID_ANY,
+				RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16MB\n");
 			return -1;
@@ -313,9 +330,11 @@ test_memzone_reserve_flags(void)
 		 * unless HINT flag is indicated
 		 */
 		if (!hugepage_16GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16G_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16G_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16GB |
+					RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 16GB & HINT\n");
 				return -1;
@@ -329,8 +348,10 @@ test_memzone_reserve_flags(void)
 				return -1;
 			}
 
-			mz = rte_memzone_reserve("flag_zone_16G", size,
-				SOCKET_ID_ANY, RTE_MEMZONE_16GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16G"),
+					size,
+					SOCKET_ID_ANY, RTE_MEMZONE_16GB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 16GB\n");
 				return -1;
@@ -339,8 +360,8 @@ test_memzone_reserve_flags(void)
 	}
 	/*As with 16MB tests above for 16GB huge page requests*/
 	if (hugepage_16GB_avail) {
-		mz = rte_memzone_reserve("flag_zone_16G", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_16GB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_16G"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_16GB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16GB\n");
 			return -1;
@@ -354,8 +375,10 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_16G_HINT", size,
-		SOCKET_ID_ANY, RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
+		mz = rte_memzone_reserve(
+				TEST_MEMZONE_NAME("flag_zone_16G_HINT"), size,
+				SOCKET_ID_ANY,
+				RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16GB\n");
 			return -1;
@@ -373,9 +396,11 @@ test_memzone_reserve_flags(void)
 		 * unless HINT flag is indicated
 		 */
 		if (!hugepage_16MB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16MB |
+					RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 16MB & HINT\n");
 				return -1;
@@ -388,8 +413,9 @@ test_memzone_reserve_flags(void)
 				printf("Fail memzone free\n");
 				return -1;
 			}
-			mz = rte_memzone_reserve("flag_zone_16M", size,
-				SOCKET_ID_ANY, RTE_MEMZONE_16MB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M"),
+					size, SOCKET_ID_ANY, RTE_MEMZONE_16MB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 16MB\n");
 				return -1;
@@ -397,9 +423,10 @@ test_memzone_reserve_flags(void)
 		}
 
 		if (hugepage_16MB_avail && hugepage_16GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB|RTE_MEMZONE_16GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16MB|RTE_MEMZONE_16GB);
 			if (mz == NULL) {
 				printf("BOTH SIZES SET\n");
 				return -1;
@@ -455,7 +482,8 @@ test_memzone_reserve_max(void)
 		return 0;
 	}
 
-	mz = rte_memzone_reserve("max_zone", 0, SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("max_zone"), 0,
+			SOCKET_ID_ANY, 0);
 	if (mz == NULL){
 		printf("Failed to reserve a big chunk of memory - %s\n",
 				rte_strerror(rte_errno));
@@ -497,8 +525,8 @@ test_memzone_reserve_max_aligned(void)
 		return 0;
 	}
 
-	mz = rte_memzone_reserve_aligned("max_zone_aligned", 0,
-			SOCKET_ID_ANY, 0, align);
+	mz = rte_memzone_reserve_aligned(TEST_MEMZONE_NAME("max_zone_aligned"),
+			0, SOCKET_ID_ANY, 0, align);
 	if (mz == NULL){
 		printf("Failed to reserve a big chunk of memory - %s\n",
 				rte_strerror(rte_errno));
@@ -535,24 +563,29 @@ test_memzone_aligned(void)
 	const struct rte_memzone *memzone_aligned_1024;
 
 	/* memzone that should automatically be adjusted to align on 64 bytes */
-	memzone_aligned_32 = rte_memzone_reserve_aligned("aligned_32", 100,
-				SOCKET_ID_ANY, 0, 32);
+	memzone_aligned_32 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_32"), 100, SOCKET_ID_ANY, 0,
+			32);
 
 	/* memzone that is supposed to be aligned on a 128 byte boundary */
-	memzone_aligned_128 = rte_memzone_reserve_aligned("aligned_128", 100,
-				SOCKET_ID_ANY, 0, 128);
+	memzone_aligned_128 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_128"), 100, SOCKET_ID_ANY, 0,
+			128);
 
 	/* memzone that is supposed to be aligned on a 256 byte boundary */
-	memzone_aligned_256 = rte_memzone_reserve_aligned("aligned_256", 100,
-				SOCKET_ID_ANY, 0, 256);
+	memzone_aligned_256 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_256"), 100, SOCKET_ID_ANY, 0,
+			256);
 
 	/* memzone that is supposed to be aligned on a 512 byte boundary */
-	memzone_aligned_512 = rte_memzone_reserve_aligned("aligned_512", 100,
-				SOCKET_ID_ANY, 0, 512);
+	memzone_aligned_512 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_512"), 100, SOCKET_ID_ANY, 0,
+			512);
 
 	/* memzone that is supposed to be aligned on a 1024 byte boundary */
-	memzone_aligned_1024 = rte_memzone_reserve_aligned("aligned_1024", 100,
-				SOCKET_ID_ANY, 0, 1024);
+	memzone_aligned_1024 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_1024"), 100, SOCKET_ID_ANY,
+			0, 1024);
 
 	printf("check alignments and lengths\n");
 	if (memzone_aligned_32 == NULL) {
@@ -721,37 +754,46 @@ static int
 test_memzone_bounded(void)
 {
 	const struct rte_memzone *memzone_err;
-	const char *name;
 	int rc;
 
 	/* should fail as boundary is not power of two */
-	name = "bounded_error_31";
-	if ((memzone_err = rte_memzone_reserve_bounded(name,
-			100, SOCKET_ID_ANY, 0, 32, UINT32_MAX)) != NULL) {
+	memzone_err = rte_memzone_reserve_bounded(
+			TEST_MEMZONE_NAME("bounded_error_31"), 100,
+			SOCKET_ID_ANY, 0, 32, UINT32_MAX);
+	if (memzone_err != NULL) {
 		printf("%s(%s)created a memzone with invalid boundary "
 			"conditions\n", __func__, memzone_err->name);
 		return -1;
 	}
 
 	/* should fail as len is greater than boundary */
-	name = "bounded_error_32";
-	if ((memzone_err = rte_memzone_reserve_bounded(name,
-			100, SOCKET_ID_ANY, 0, 32, 32)) != NULL) {
+	memzone_err = rte_memzone_reserve_bounded(
+			TEST_MEMZONE_NAME("bounded_error_32"), 100,
+			SOCKET_ID_ANY, 0, 32, 32);
+	if (memzone_err != NULL) {
 		printf("%s(%s)created a memzone with invalid boundary "
 			"conditions\n", __func__, memzone_err->name);
 		return -1;
 	}
 
-	if ((rc = check_memzone_bounded("bounded_128", 100, 128, 128)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_128"), 100, 128,
+			128);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_256", 100, 256, 128)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_256"), 100, 256,
+			128);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_1K", 100, 64, 1024)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_1K"), 100, 64,
+			1024);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_1K_MAX", 0, 64, 1024)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_1K_MAX"), 0, 64,
+			1024);
+	if (rc != 0)
 		return rc;
 
 	return 0;
@@ -764,25 +806,28 @@ test_memzone_free(void)
 	int i;
 	char name[20];
 
-	mz[0] = rte_memzone_reserve("tempzone0", 2000, SOCKET_ID_ANY, 0);
-	mz[1] = rte_memzone_reserve("tempzone1", 4000, SOCKET_ID_ANY, 0);
+	mz[0] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone0"), 2000,
+			SOCKET_ID_ANY, 0);
+	mz[1] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone1"), 4000,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[0] > mz[1])
 		return -1;
-	if (!rte_memzone_lookup("tempzone0"))
+	if (!rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone0")))
 		return -1;
-	if (!rte_memzone_lookup("tempzone1"))
+	if (!rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone1")))
 		return -1;
 
 	if (rte_memzone_free(mz[0])) {
 		printf("Fail memzone free - tempzone0\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone0")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone0"))) {
 		printf("Found previously free memzone - tempzone0\n");
 		return -1;
 	}
-	mz[2] = rte_memzone_reserve("tempzone2", 2000, SOCKET_ID_ANY, 0);
+	mz[2] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone2"), 2000,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[2] > mz[1]) {
 		printf("tempzone2 should have gotten the free entry from tempzone0\n");
@@ -792,7 +837,7 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone2\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone2")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone2"))) {
 		printf("Found previously free memzone - tempzone2\n");
 		return -1;
 	}
@@ -800,14 +845,15 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone1\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone1")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone1"))) {
 		printf("Found previously free memzone - tempzone1\n");
 		return -1;
 	}
 
 	i = 0;
 	do {
-		snprintf(name, sizeof(name), "tempzone%u", i);
+		snprintf(name, sizeof(name), TEST_MEMZONE_NAME("tempzone%u"),
+				i);
 		mz[i] = rte_memzone_reserve(name, 1, SOCKET_ID_ANY, 0);
 	} while (mz[i++] != NULL);
 
@@ -815,7 +861,8 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone0\n");
 		return -1;
 	}
-	mz[0] = rte_memzone_reserve("tempzone0new", 0, SOCKET_ID_ANY, 0);
+	mz[0] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone0new"), 0,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[0] == NULL) {
 		printf("Fail to create memzone - tempzone0new - when MAX memzones were "
@@ -845,16 +892,16 @@ test_memzone_basic(void)
 	int memzone_cnt_before =
 			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
-	memzone1 = rte_memzone_reserve("testzone1", 100,
+	memzone1 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
 				SOCKET_ID_ANY, 0);
 
-	memzone2 = rte_memzone_reserve("testzone2", 1000,
+	memzone2 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone2"), 1000,
 				0, 0);
 
-	memzone3 = rte_memzone_reserve("testzone3", 1000,
+	memzone3 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone3"), 1000,
 				1, 0);
 
-	memzone4 = rte_memzone_reserve("testzone4", 1024,
+	memzone4 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone4"), 1024,
 				SOCKET_ID_ANY, 0);
 
 	/* memzone3 may be NULL if we don't have NUMA */
@@ -918,12 +965,12 @@ test_memzone_basic(void)
 		return -1;
 
 	printf("test zone lookup\n");
-	mz = rte_memzone_lookup("testzone1");
+	mz = rte_memzone_lookup(TEST_MEMZONE_NAME("testzone1"));
 	if (mz != memzone1)
 		return -1;
 
 	printf("test duplicate zone name\n");
-	mz = rte_memzone_reserve("testzone1", 100,
+	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
 			SOCKET_ID_ANY, 0);
 	if (mz != NULL)
 		return -1;
@@ -953,16 +1000,22 @@ test_memzone_basic(void)
 	return 0;
 }
 
-static int memzone_walk_called;
-static void memzone_walk_clb(const struct rte_memzone *mz __rte_unused,
+static int test_memzones_left;
+static int memzone_walk_cnt;
+static void memzone_walk_clb(const struct rte_memzone *mz,
 			     void *arg __rte_unused)
 {
-	memzone_walk_called = 1;
+	memzone_walk_cnt++;
+	if (!strncmp(TEST_MEMZONE_NAME(""), mz->name, RTE_MEMZONE_NAMESIZE))
+		test_memzones_left++;
 }
 
 static int
 test_memzone(void)
 {
+	/* take note of how many memzones were allocated before running */
+	int memzone_cnt = rte_eal_get_configuration()->mem_config->memzone_cnt;
+
 	printf("test basic memzone API\n");
 	if (test_memzone_basic() < 0)
 		return -1;
@@ -1000,8 +1053,10 @@ test_memzone(void)
 		return -1;
 
 	printf("check memzone cleanup\n");
+	memzone_walk_cnt = 0;
+	test_memzones_left = 0;
 	rte_memzone_walk(memzone_walk_clb, NULL);
-	if (memzone_walk_called) {
+	if (memzone_walk_cnt != memzone_cnt || test_memzones_left > 0) {
 		printf("there are some memzones left after test\n");
 		rte_memzone_dump(stdout);
 		return -1;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/2] test/memzone: add test for memzone count in eal mem config
  2018-02-01  0:12   ` Thomas Monjalon
@ 2018-02-01 10:05     ` Burakov, Anatoly
  0 siblings, 0 replies; 17+ messages in thread
From: Burakov, Anatoly @ 2018-02-01 10:05 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On 01-Feb-18 12:12 AM, Thomas Monjalon wrote:
> 31/01/2018 16:29, Anatoly Burakov:
>> Ensure that memzone count in eal mem config is incremented and
>> decremented whenever memzones are allocated and freed.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> Please report acks from previous version.
> 
> 
OK, will submit a v4.

-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v4 1/2] test/memzone: add test for memzone count in eal mem config
  2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
@ 2018-02-01 10:14     ` Anatoly Burakov
  2018-02-06  0:49       ` Thomas Monjalon
  2018-02-01 10:14     ` [PATCH v4 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
  1 sibling, 1 reply; 17+ messages in thread
From: Anatoly Burakov @ 2018-02-01 10:14 UTC (permalink / raw)
  To: dev

Ensure that memzone count in eal mem config is incremented and
decremented whenever memzones are allocated and freed.

Reviewed-by: Radoslaw Biernacki <radoslaw.biernacki@linaro.com>

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v4: added missing reviewed-by tag

 test/test/test_memzone.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index f6c9b56..00d340f 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -841,6 +841,9 @@ test_memzone_basic(void)
 	const struct rte_memzone *memzone3;
 	const struct rte_memzone *memzone4;
 	const struct rte_memzone *mz;
+	int memzone_cnt_after, memzone_cnt_expected;
+	int memzone_cnt_before =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
 	memzone1 = rte_memzone_reserve("testzone1", 100,
 				SOCKET_ID_ANY, 0);
@@ -858,6 +861,18 @@ test_memzone_basic(void)
 	if (memzone1 == NULL || memzone2 == NULL || memzone4 == NULL)
 		return -1;
 
+	/* check how many memzones we are expecting */
+	memzone_cnt_expected = memzone_cnt_before +
+			(memzone1 != NULL) + (memzone2 != NULL) +
+			(memzone3 != NULL) + (memzone4 != NULL);
+
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+
+	if (memzone_cnt_after != memzone_cnt_expected)
+		return -1;
+
+
 	rte_memzone_dump(stdout);
 
 	/* check cache-line alignments */
@@ -930,6 +945,11 @@ test_memzone_basic(void)
 		return -1;
 	}
 
+	memzone_cnt_after =
+			rte_eal_get_configuration()->mem_config->memzone_cnt;
+	if (memzone_cnt_after != memzone_cnt_before)
+		return -1;
+
 	return 0;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread
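
A minimal standalone sketch of the bookkeeping check this patch adds, kept
outside the patch itself: reserve one memzone, expect memzone_cnt in the
shared mem config to rise by one, free the zone, and expect the counter to
return to its starting value. It assumes a DPDK tree of this era, where
rte_eal_get_configuration() and struct rte_mem_config (including its
memzone_cnt field) are still publicly visible; the zone name
"MZ_TEST_sketch" and the program itself are illustrative, not part of the
patch.

#include <stdio.h>

#include <rte_eal.h>
#include <rte_eal_memconfig.h>
#include <rte_memory.h>
#include <rte_memzone.h>

/* Read the EAL-wide memzone counter that the test compares against. */
static int
memzone_count(void)
{
	return rte_eal_get_configuration()->mem_config->memzone_cnt;
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}

	int before = memzone_count();

	/* Reserving a zone should bump the counter by exactly one. */
	const struct rte_memzone *mz = rte_memzone_reserve("MZ_TEST_sketch",
			100, SOCKET_ID_ANY, 0);
	if (mz == NULL || memzone_count() != before + 1) {
		fprintf(stderr, "counter not incremented on reserve\n");
		return 1;
	}

	/* Freeing the zone should bring the counter back down. */
	if (rte_memzone_free(mz) != 0 || memzone_count() != before) {
		fprintf(stderr, "counter not decremented on free\n");
		return 1;
	}

	printf("memzone count bookkeeping looks consistent\n");
	return 0;
}

Build and run it like any other EAL application with the usual EAL
arguments; later DPDK releases no longer expose these mem config internals,
so the sketch only applies to trees like the one patched here.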

* [PATCH v4 2/2] test/memzone: handle previously allocated memzones
  2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
  2018-02-01 10:14     ` [PATCH v4 " Anatoly Burakov
@ 2018-02-01 10:14     ` Anatoly Burakov
  1 sibling, 0 replies; 17+ messages in thread
From: Anatoly Burakov @ 2018-02-01 10:14 UTC (permalink / raw)
  To: dev; +Cc: radoslaw.biernacki, stable, Phil Yang

Currently, memzone autotest expects there to be no memzones
present by the time the test is run. Some hardware drivers
will allocate memzones for internal use during initialization,
resulting in tests failing due to unexpected memzones being
allocated before the test was run.

Fix this by making sure all memzones allocated by this test
have a common prefix, and making the walk callback increment a
counter whenever it encounters a memzone with this prefix. Also,
separately increment another counter that tracks the total number
of memzones left after the test, and compare it to the previously
stored number of memzones, to ensure that we didn't accidentally
allocate or free any memzones we weren't supposed to. This also
doubles as a test for correct operation of memzone_walk().

Fixes: 71330483a193 ("test/memzone: fix memory leak")
Cc: radoslaw.biernacki@linaro.org
Cc: stable@dpdk.org

Reviewed-by: Radoslaw Biernacki <radoslaw.biernacki@linaro.com>

Signed-off-by: Phil Yang <Phil.Yang@arm.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v4: added missing reviewed-by tag
    
    v3: suggested-by was supposed to be a signoff
    
    v2: incorporated Phil Yang's patch to better ensure
        no memzones were left behind by the test

 test/test/test_memzone.c | 225 +++++++++++++++++++++++++++++------------------
 1 file changed, 140 insertions(+), 85 deletions(-)

diff --git a/test/test/test_memzone.c b/test/test/test_memzone.c
index 00d340f..8ece1ac 100644
--- a/test/test/test_memzone.c
+++ b/test/test/test_memzone.c
@@ -4,6 +4,7 @@
 
 #include <stdio.h>
 #include <stdint.h>
+#include <string.h>
 #include <inttypes.h>
 #include <sys/queue.h>
 
@@ -47,6 +48,8 @@
  * - Check flags for specific huge page size reservation
  */
 
+#define TEST_MEMZONE_NAME(suffix) "MZ_TEST_" suffix
+
 /* Test if memory overlaps: return 1 if true, or 0 if false. */
 static int
 is_memory_overlap(rte_iova_t ptr1, size_t len1, rte_iova_t ptr2, size_t len2)
@@ -63,14 +66,14 @@ test_memzone_invalid_alignment(void)
 {
 	const struct rte_memzone * mz;
 
-	mz = rte_memzone_lookup("invalid_alignment");
+	mz = rte_memzone_lookup(TEST_MEMZONE_NAME("invalid_alignment"));
 	if (mz != NULL) {
 		printf("Zone with invalid alignment has been reserved\n");
 		return -1;
 	}
 
-	mz = rte_memzone_reserve_aligned("invalid_alignment", 100,
-			SOCKET_ID_ANY, 0, 100);
+	mz = rte_memzone_reserve_aligned(TEST_MEMZONE_NAME("invalid_alignment"),
+					 100, SOCKET_ID_ANY, 0, 100);
 	if (mz != NULL) {
 		printf("Zone with invalid alignment has been reserved\n");
 		return -1;
@@ -83,14 +86,16 @@ test_memzone_reserving_zone_size_bigger_than_the_maximum(void)
 {
 	const struct rte_memzone * mz;
 
-	mz = rte_memzone_lookup("zone_size_bigger_than_the_maximum");
+	mz = rte_memzone_lookup(
+			TEST_MEMZONE_NAME("zone_size_bigger_than_the_maximum"));
 	if (mz != NULL) {
 		printf("zone_size_bigger_than_the_maximum has been reserved\n");
 		return -1;
 	}
 
-	mz = rte_memzone_reserve("zone_size_bigger_than_the_maximum", (size_t)-1,
-			SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve(
+			TEST_MEMZONE_NAME("zone_size_bigger_than_the_maximum"),
+			(size_t)-1, SOCKET_ID_ANY, 0);
 	if (mz != NULL) {
 		printf("It is impossible to reserve such a big memzone\n");
 		return -1;
@@ -137,8 +142,8 @@ test_memzone_reserve_flags(void)
 	 * available page size (i.e 1GB ) when 2MB pages are unavailable.
 	 */
 	if (hugepage_2MB_avail) {
-		mz = rte_memzone_reserve("flag_zone_2M", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_2MB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_2M"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_2MB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 2MB\n");
 			return -1;
@@ -152,7 +157,8 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+				size, SOCKET_ID_ANY,
 				RTE_MEMZONE_2MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 2MB\n");
@@ -171,7 +177,9 @@ test_memzone_reserve_flags(void)
 		 * HINT flag is indicated
 		 */
 		if (!hugepage_1GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_1G_HINT", size, SOCKET_ID_ANY,
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_1G_HINT"),
+					size, SOCKET_ID_ANY,
 					RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 1GB & HINT\n");
@@ -186,8 +194,9 @@ test_memzone_reserve_flags(void)
 				return -1;
 			}
 
-			mz = rte_memzone_reserve("flag_zone_1G", size, SOCKET_ID_ANY,
-					RTE_MEMZONE_1GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_1G"), size,
+					SOCKET_ID_ANY, RTE_MEMZONE_1GB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 1GB\n");
 				return -1;
@@ -197,8 +206,8 @@ test_memzone_reserve_flags(void)
 
 	/*As with 2MB tests above for 1GB huge page requests*/
 	if (hugepage_1GB_avail) {
-		mz = rte_memzone_reserve("flag_zone_1G", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_1GB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_1G"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_1GB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 1GB\n");
 			return -1;
@@ -212,7 +221,8 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_1G_HINT", size, SOCKET_ID_ANY,
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_1G_HINT"),
+				size, SOCKET_ID_ANY,
 				RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 1GB\n");
@@ -231,7 +241,9 @@ test_memzone_reserve_flags(void)
 		 * HINT flag is indicated
 		 */
 		if (!hugepage_2MB_avail) {
-			mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+					size, SOCKET_ID_ANY,
 					RTE_MEMZONE_2MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL){
 				printf("MEMZONE FLAG 2MB & HINT\n");
@@ -245,8 +257,9 @@ test_memzone_reserve_flags(void)
 				printf("Fail memzone free\n");
 				return -1;
 			}
-			mz = rte_memzone_reserve("flag_zone_2M", size, SOCKET_ID_ANY,
-					RTE_MEMZONE_2MB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M"), size,
+					SOCKET_ID_ANY, RTE_MEMZONE_2MB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 2MB\n");
 				return -1;
@@ -254,8 +267,10 @@ test_memzone_reserve_flags(void)
 		}
 
 		if (hugepage_2MB_avail && hugepage_1GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_2M_HINT", size, SOCKET_ID_ANY,
-								RTE_MEMZONE_2MB|RTE_MEMZONE_1GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_2M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_2MB|RTE_MEMZONE_1GB);
 			if (mz == NULL) {
 				printf("BOTH SIZES SET\n");
 				return -1;
@@ -279,8 +294,8 @@ test_memzone_reserve_flags(void)
 	 * page size (i.e 16GB ) when 16MB pages are unavailable.
 	 */
 	if (hugepage_16MB_avail) {
-		mz = rte_memzone_reserve("flag_zone_16M", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_16M"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_16MB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16MB\n");
 			return -1;
@@ -294,8 +309,10 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-		SOCKET_ID_ANY, RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
+		mz = rte_memzone_reserve(
+				TEST_MEMZONE_NAME("flag_zone_16M_HINT"), size,
+				SOCKET_ID_ANY,
+				RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16MB\n");
 			return -1;
@@ -313,9 +330,11 @@ test_memzone_reserve_flags(void)
 		 * unless HINT flag is indicated
 		 */
 		if (!hugepage_16GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16G_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16G_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16GB |
+					RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 16GB & HINT\n");
 				return -1;
@@ -329,8 +348,10 @@ test_memzone_reserve_flags(void)
 				return -1;
 			}
 
-			mz = rte_memzone_reserve("flag_zone_16G", size,
-				SOCKET_ID_ANY, RTE_MEMZONE_16GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16G"),
+					size,
+					SOCKET_ID_ANY, RTE_MEMZONE_16GB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 16GB\n");
 				return -1;
@@ -339,8 +360,8 @@ test_memzone_reserve_flags(void)
 	}
 	/*As with 16MB tests above for 16GB huge page requests*/
 	if (hugepage_16GB_avail) {
-		mz = rte_memzone_reserve("flag_zone_16G", size, SOCKET_ID_ANY,
-				RTE_MEMZONE_16GB);
+		mz = rte_memzone_reserve(TEST_MEMZONE_NAME("flag_zone_16G"),
+				size, SOCKET_ID_ANY, RTE_MEMZONE_16GB);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16GB\n");
 			return -1;
@@ -354,8 +375,10 @@ test_memzone_reserve_flags(void)
 			return -1;
 		}
 
-		mz = rte_memzone_reserve("flag_zone_16G_HINT", size,
-		SOCKET_ID_ANY, RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
+		mz = rte_memzone_reserve(
+				TEST_MEMZONE_NAME("flag_zone_16G_HINT"), size,
+				SOCKET_ID_ANY,
+				RTE_MEMZONE_16GB|RTE_MEMZONE_SIZE_HINT_ONLY);
 		if (mz == NULL) {
 			printf("MEMZONE FLAG 16GB\n");
 			return -1;
@@ -373,9 +396,11 @@ test_memzone_reserve_flags(void)
 		 * unless HINT flag is indicated
 		 */
 		if (!hugepage_16MB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB|RTE_MEMZONE_SIZE_HINT_ONLY);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16MB |
+					RTE_MEMZONE_SIZE_HINT_ONLY);
 			if (mz == NULL) {
 				printf("MEMZONE FLAG 16MB & HINT\n");
 				return -1;
@@ -388,8 +413,9 @@ test_memzone_reserve_flags(void)
 				printf("Fail memzone free\n");
 				return -1;
 			}
-			mz = rte_memzone_reserve("flag_zone_16M", size,
-				SOCKET_ID_ANY, RTE_MEMZONE_16MB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M"),
+					size, SOCKET_ID_ANY, RTE_MEMZONE_16MB);
 			if (mz != NULL) {
 				printf("MEMZONE FLAG 16MB\n");
 				return -1;
@@ -397,9 +423,10 @@ test_memzone_reserve_flags(void)
 		}
 
 		if (hugepage_16MB_avail && hugepage_16GB_avail) {
-			mz = rte_memzone_reserve("flag_zone_16M_HINT", size,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_16MB|RTE_MEMZONE_16GB);
+			mz = rte_memzone_reserve(
+					TEST_MEMZONE_NAME("flag_zone_16M_HINT"),
+					size, SOCKET_ID_ANY,
+					RTE_MEMZONE_16MB|RTE_MEMZONE_16GB);
 			if (mz == NULL) {
 				printf("BOTH SIZES SET\n");
 				return -1;
@@ -455,7 +482,8 @@ test_memzone_reserve_max(void)
 		return 0;
 	}
 
-	mz = rte_memzone_reserve("max_zone", 0, SOCKET_ID_ANY, 0);
+	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("max_zone"), 0,
+			SOCKET_ID_ANY, 0);
 	if (mz == NULL){
 		printf("Failed to reserve a big chunk of memory - %s\n",
 				rte_strerror(rte_errno));
@@ -497,8 +525,8 @@ test_memzone_reserve_max_aligned(void)
 		return 0;
 	}
 
-	mz = rte_memzone_reserve_aligned("max_zone_aligned", 0,
-			SOCKET_ID_ANY, 0, align);
+	mz = rte_memzone_reserve_aligned(TEST_MEMZONE_NAME("max_zone_aligned"),
+			0, SOCKET_ID_ANY, 0, align);
 	if (mz == NULL){
 		printf("Failed to reserve a big chunk of memory - %s\n",
 				rte_strerror(rte_errno));
@@ -535,24 +563,29 @@ test_memzone_aligned(void)
 	const struct rte_memzone *memzone_aligned_1024;
 
 	/* memzone that should automatically be adjusted to align on 64 bytes */
-	memzone_aligned_32 = rte_memzone_reserve_aligned("aligned_32", 100,
-				SOCKET_ID_ANY, 0, 32);
+	memzone_aligned_32 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_32"), 100, SOCKET_ID_ANY, 0,
+			32);
 
 	/* memzone that is supposed to be aligned on a 128 byte boundary */
-	memzone_aligned_128 = rte_memzone_reserve_aligned("aligned_128", 100,
-				SOCKET_ID_ANY, 0, 128);
+	memzone_aligned_128 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_128"), 100, SOCKET_ID_ANY, 0,
+			128);
 
 	/* memzone that is supposed to be aligned on a 256 byte boundary */
-	memzone_aligned_256 = rte_memzone_reserve_aligned("aligned_256", 100,
-				SOCKET_ID_ANY, 0, 256);
+	memzone_aligned_256 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_256"), 100, SOCKET_ID_ANY, 0,
+			256);
 
 	/* memzone that is supposed to be aligned on a 512 byte boundary */
-	memzone_aligned_512 = rte_memzone_reserve_aligned("aligned_512", 100,
-				SOCKET_ID_ANY, 0, 512);
+	memzone_aligned_512 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_512"), 100, SOCKET_ID_ANY, 0,
+			512);
 
 	/* memzone that is supposed to be aligned on a 1024 byte boundary */
-	memzone_aligned_1024 = rte_memzone_reserve_aligned("aligned_1024", 100,
-				SOCKET_ID_ANY, 0, 1024);
+	memzone_aligned_1024 = rte_memzone_reserve_aligned(
+			TEST_MEMZONE_NAME("aligned_1024"), 100, SOCKET_ID_ANY,
+			0, 1024);
 
 	printf("check alignments and lengths\n");
 	if (memzone_aligned_32 == NULL) {
@@ -721,37 +754,46 @@ static int
 test_memzone_bounded(void)
 {
 	const struct rte_memzone *memzone_err;
-	const char *name;
 	int rc;
 
 	/* should fail as boundary is not power of two */
-	name = "bounded_error_31";
-	if ((memzone_err = rte_memzone_reserve_bounded(name,
-			100, SOCKET_ID_ANY, 0, 32, UINT32_MAX)) != NULL) {
+	memzone_err = rte_memzone_reserve_bounded(
+			TEST_MEMZONE_NAME("bounded_error_31"), 100,
+			SOCKET_ID_ANY, 0, 32, UINT32_MAX);
+	if (memzone_err != NULL) {
 		printf("%s(%s)created a memzone with invalid boundary "
 			"conditions\n", __func__, memzone_err->name);
 		return -1;
 	}
 
 	/* should fail as len is greater than boundary */
-	name = "bounded_error_32";
-	if ((memzone_err = rte_memzone_reserve_bounded(name,
-			100, SOCKET_ID_ANY, 0, 32, 32)) != NULL) {
+	memzone_err = rte_memzone_reserve_bounded(
+			TEST_MEMZONE_NAME("bounded_error_32"), 100,
+			SOCKET_ID_ANY, 0, 32, 32);
+	if (memzone_err != NULL) {
 		printf("%s(%s)created a memzone with invalid boundary "
 			"conditions\n", __func__, memzone_err->name);
 		return -1;
 	}
 
-	if ((rc = check_memzone_bounded("bounded_128", 100, 128, 128)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_128"), 100, 128,
+			128);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_256", 100, 256, 128)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_256"), 100, 256,
+			128);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_1K", 100, 64, 1024)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_1K"), 100, 64,
+			1024);
+	if (rc != 0)
 		return rc;
 
-	if ((rc = check_memzone_bounded("bounded_1K_MAX", 0, 64, 1024)) != 0)
+	rc = check_memzone_bounded(TEST_MEMZONE_NAME("bounded_1K_MAX"), 0, 64,
+			1024);
+	if (rc != 0)
 		return rc;
 
 	return 0;
@@ -764,25 +806,28 @@ test_memzone_free(void)
 	int i;
 	char name[20];
 
-	mz[0] = rte_memzone_reserve("tempzone0", 2000, SOCKET_ID_ANY, 0);
-	mz[1] = rte_memzone_reserve("tempzone1", 4000, SOCKET_ID_ANY, 0);
+	mz[0] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone0"), 2000,
+			SOCKET_ID_ANY, 0);
+	mz[1] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone1"), 4000,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[0] > mz[1])
 		return -1;
-	if (!rte_memzone_lookup("tempzone0"))
+	if (!rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone0")))
 		return -1;
-	if (!rte_memzone_lookup("tempzone1"))
+	if (!rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone1")))
 		return -1;
 
 	if (rte_memzone_free(mz[0])) {
 		printf("Fail memzone free - tempzone0\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone0")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone0"))) {
 		printf("Found previously free memzone - tempzone0\n");
 		return -1;
 	}
-	mz[2] = rte_memzone_reserve("tempzone2", 2000, SOCKET_ID_ANY, 0);
+	mz[2] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone2"), 2000,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[2] > mz[1]) {
 		printf("tempzone2 should have gotten the free entry from tempzone0\n");
@@ -792,7 +837,7 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone2\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone2")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone2"))) {
 		printf("Found previously free memzone - tempzone2\n");
 		return -1;
 	}
@@ -800,14 +845,15 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone1\n");
 		return -1;
 	}
-	if (rte_memzone_lookup("tempzone1")) {
+	if (rte_memzone_lookup(TEST_MEMZONE_NAME("tempzone1"))) {
 		printf("Found previously free memzone - tempzone1\n");
 		return -1;
 	}
 
 	i = 0;
 	do {
-		snprintf(name, sizeof(name), "tempzone%u", i);
+		snprintf(name, sizeof(name), TEST_MEMZONE_NAME("tempzone%u"),
+				i);
 		mz[i] = rte_memzone_reserve(name, 1, SOCKET_ID_ANY, 0);
 	} while (mz[i++] != NULL);
 
@@ -815,7 +861,8 @@ test_memzone_free(void)
 		printf("Fail memzone free - tempzone0\n");
 		return -1;
 	}
-	mz[0] = rte_memzone_reserve("tempzone0new", 0, SOCKET_ID_ANY, 0);
+	mz[0] = rte_memzone_reserve(TEST_MEMZONE_NAME("tempzone0new"), 0,
+			SOCKET_ID_ANY, 0);
 
 	if (mz[0] == NULL) {
 		printf("Fail to create memzone - tempzone0new - when MAX memzones were "
@@ -845,16 +892,16 @@ test_memzone_basic(void)
 	int memzone_cnt_before =
 			rte_eal_get_configuration()->mem_config->memzone_cnt;
 
-	memzone1 = rte_memzone_reserve("testzone1", 100,
+	memzone1 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
 				SOCKET_ID_ANY, 0);
 
-	memzone2 = rte_memzone_reserve("testzone2", 1000,
+	memzone2 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone2"), 1000,
 				0, 0);
 
-	memzone3 = rte_memzone_reserve("testzone3", 1000,
+	memzone3 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone3"), 1000,
 				1, 0);
 
-	memzone4 = rte_memzone_reserve("testzone4", 1024,
+	memzone4 = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone4"), 1024,
 				SOCKET_ID_ANY, 0);
 
 	/* memzone3 may be NULL if we don't have NUMA */
@@ -918,12 +965,12 @@ test_memzone_basic(void)
 		return -1;
 
 	printf("test zone lookup\n");
-	mz = rte_memzone_lookup("testzone1");
+	mz = rte_memzone_lookup(TEST_MEMZONE_NAME("testzone1"));
 	if (mz != memzone1)
 		return -1;
 
 	printf("test duplicate zone name\n");
-	mz = rte_memzone_reserve("testzone1", 100,
+	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
 			SOCKET_ID_ANY, 0);
 	if (mz != NULL)
 		return -1;
@@ -953,16 +1000,22 @@ test_memzone_basic(void)
 	return 0;
 }
 
-static int memzone_walk_called;
-static void memzone_walk_clb(const struct rte_memzone *mz __rte_unused,
+static int test_memzones_left;
+static int memzone_walk_cnt;
+static void memzone_walk_clb(const struct rte_memzone *mz,
 			     void *arg __rte_unused)
 {
-	memzone_walk_called = 1;
+	memzone_walk_cnt++;
+	if (!strncmp(TEST_MEMZONE_NAME(""), mz->name, RTE_MEMZONE_NAMESIZE))
+		test_memzones_left++;
 }
 
 static int
 test_memzone(void)
 {
+	/* take note of how many memzones were allocated before running */
+	int memzone_cnt = rte_eal_get_configuration()->mem_config->memzone_cnt;
+
 	printf("test basic memzone API\n");
 	if (test_memzone_basic() < 0)
 		return -1;
@@ -1000,8 +1053,10 @@ test_memzone(void)
 		return -1;
 
 	printf("check memzone cleanup\n");
+	memzone_walk_cnt = 0;
+	test_memzones_left = 0;
 	rte_memzone_walk(memzone_walk_clb, NULL);
-	if (memzone_walk_called) {
+	if (memzone_walk_cnt != memzone_cnt || test_memzones_left > 0) {
 		printf("there are some memzones left after test\n");
 		rte_memzone_dump(stdout);
 		return -1;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread
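
The cleanup check in this patch rests on a pair of counters driven by
rte_memzone_walk(): one counts every registered memzone, the other only
those whose names carry the test prefix. Below is a minimal standalone
sketch of that callback pattern, again assuming an 18.02-era DPDK tree
where these EAL APIs are public; the prefix "MZ_TEST_" mirrors
TEST_MEMZONE_NAME() from the patch, while the program and its printout are
illustrative only. Note that the sketch bounds the comparison by the prefix
length, so any name that merely begins with the prefix is counted.

#include <stdio.h>
#include <string.h>

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_memzone.h>

#define TEST_PREFIX "MZ_TEST_"

static int zones_total;		/* every memzone seen by the walk */
static int zones_with_prefix;	/* only the test-owned ones */

/* rte_memzone_walk() invokes this once per registered memzone. */
static void
count_cb(const struct rte_memzone *mz, void *arg __rte_unused)
{
	zones_total++;
	if (strncmp(mz->name, TEST_PREFIX, strlen(TEST_PREFIX)) == 0)
		zones_with_prefix++;
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}

	/*
	 * Drivers may already have reserved zones of their own during
	 * rte_eal_init(); only zones carrying the prefix would count as
	 * leftovers of the test.
	 */
	rte_memzone_walk(count_cb, NULL);
	printf("%d memzone(s) total, %d with prefix \"%s\"\n",
			zones_total, zones_with_prefix, TEST_PREFIX);
	return 0;
}

In the test itself the total from the walk is also compared against the
memzone_cnt value recorded before the run, so both stray allocations and
stray frees are caught, not just leftover test zones.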

* Re: [PATCH v4 1/2] test/memzone: add test for memzone count in eal mem config
  2018-02-01 10:14     ` [PATCH v4 " Anatoly Burakov
@ 2018-02-06  0:49       ` Thomas Monjalon
  0 siblings, 0 replies; 17+ messages in thread
From: Thomas Monjalon @ 2018-02-06  0:49 UTC (permalink / raw)
  To: Anatoly Burakov; +Cc: dev, Radoslaw Biernacki

01/02/2018 11:14, Anatoly Burakov:
> Ensure that memzone count in eal mem config is incremented and
> decremented whenever memzones are allocated and freed.
> 
> Reviewed-by: Radoslaw Biernacki <radoslaw.biernacki@linaro.com>
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>

Series applied, thanks

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2018-02-06  0:49 UTC | newest]

Thread overview: 17+ messages
2018-01-26 17:40 [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Anatoly Burakov
2018-01-26 17:40 ` [PATCH 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
2018-01-27 14:46   ` Radoslaw Biernacki
2018-01-31  7:51   ` Phil Yang
2018-01-31 10:05     ` Burakov, Anatoly
2018-01-31 10:08       ` Phil Yang
2018-01-27 14:53 ` [PATCH 1/2] test/memzone: add test for memzone count in eal mem config Radoslaw Biernacki
2018-01-29  9:40   ` Burakov, Anatoly
2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
2018-02-01  0:12   ` Thomas Monjalon
2018-02-01 10:05     ` Burakov, Anatoly
2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
2018-02-01 10:14     ` [PATCH v4 " Anatoly Burakov
2018-02-06  0:49       ` Thomas Monjalon
2018-02-01 10:14     ` [PATCH v4 2/2] test/memzone: handle previously allocated memzones Anatoly Burakov
2018-02-01 10:02   ` [PATCH v3 " Anatoly Burakov
2018-01-31 15:29 ` [PATCH v2 " Anatoly Burakov
