fstests.vger.kernel.org archive mirror
* [PATCH v2] generic/233,270: unlimit the max locked memory size for io_uring
@ 2021-03-30  0:59 Zorro Lang
  2021-03-30  4:55 ` Darrick J. Wong
  0 siblings, 1 reply; 2+ messages in thread
From: Zorro Lang @ 2021-03-30  0:59 UTC (permalink / raw)
  To: fstests

ltp/fsstress always fails in io_uring_queue_init(), which returns
ENOMEM, because io_uring accounts the memory it needs against the
memlock rlimit, which can be quite low on some setups, especially on
64K page size machines. root isn't under this restriction, but regular
users are, so only g/233 and g/270, which use $qa_user to run
fsstress, fail.

To avoid this failure, set the max locked memory to unlimited before
running fsstress. Each test case runs in its own child process, so the
limit does not need to be restored afterwards.
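
For illustration only (not part of this patch), the limit in question can
be inspected with ulimit; the small per-user value is what makes the
io_uring ring setup fail:

  # Max locked memory, in KiB, as seen by the unprivileged test user; a
  # small value here (e.g. 64) makes io_uring_queue_init() return ENOMEM.
  su $qa_user -c 'ulimit -l'

  # As root, the limit can be lifted for the current shell and its children.
  ulimit -l unlimited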

Signed-off-by: Zorro Lang <zlang@redhat.com>
---

Hi,

V2 removed the `ulimit -l $lmem` restore step: each test case runs in its
own child process, so raising the limit there won't affect other tests.
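
For what it's worth, a quick shell illustration of that reasoning (run as
root): a limit raised in a child process never propagates back to the
parent, so nothing leaks into later tests.

  bash -c 'ulimit -l unlimited; ulimit -l'   # child reports "unlimited"
  ulimit -l                                  # parent limit is unchanged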

Thanks,
Zorro

 tests/generic/233 | 6 ++++++
 tests/generic/270 | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/tests/generic/233 b/tests/generic/233
index 7eda5774..cc794c79 100755
--- a/tests/generic/233
+++ b/tests/generic/233
@@ -43,6 +43,12 @@ _fsstress()
 -f rename=10 -f fsync=2 -f write=15 -f dwrite=15 \
 -n $count -d $out -p 7`
 
+	# io_uring accounts the memory it needs against the memlock rlimit,
+	# which can be quite low on some setups (especially with a 64K page
+	# size). root isn't under this restriction, but regular users are. To
+	# avoid io_uring_queue_init() failing with ENOMEM, set the max locked
+	# memory to unlimited temporarily.
+	ulimit -l unlimited
 	echo "fsstress $args" >> $seqres.full
 	if ! su $qa_user -c "$FSSTRESS_PROG $args" | tee -a $seqres.full | _filter_num
 	then
diff --git a/tests/generic/270 b/tests/generic/270
index 3d8656d4..e93940ef 100755
--- a/tests/generic/270
+++ b/tests/generic/270
@@ -37,6 +37,12 @@ _workout()
 	cp $FSSTRESS_PROG  $tmp.fsstress.bin
 	$SETCAP_PROG cap_chown=epi  $tmp.fsstress.bin
 
+	# io_uring accounts the memory it needs against the memlock rlimit,
+	# which can be quite low on some setups (especially with a 64K page
+	# size). root isn't under this restriction, but regular users are. To
+	# avoid io_uring_queue_init() failing with ENOMEM, set the max locked
+	# memory to unlimited temporarily.
+	ulimit -l unlimited
 	(su $qa_user -c "$tmp.fsstress.bin $args" &) > /dev/null 2>&1
 
 	echo "Run dd writers in parallel"
-- 
2.30.2



* Re: [PATCH v2] generic/233,270: unlimit the max locked memory size for io_uring
  2021-03-30  0:59 [PATCH v2] generic/233,270: unlimit the max locked memory size for io_uring Zorro Lang
@ 2021-03-30  4:55 ` Darrick J. Wong
  0 siblings, 0 replies; 2+ messages in thread
From: Darrick J. Wong @ 2021-03-30  4:55 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests

On Tue, Mar 30, 2021 at 08:59:42AM +0800, Zorro Lang wrote:
> ltp/fsstress always fails in io_uring_queue_init(), which returns
> ENOMEM, because io_uring accounts the memory it needs against the
> memlock rlimit, which can be quite low on some setups, especially on
> 64K page size machines. root isn't under this restriction, but regular
> users are, so only g/233 and g/270, which use $qa_user to run
> fsstress, fail.
> 
> To avoid this failure, set the max locked memory to unlimited before
> running fsstress. Each test case runs in its own child process, so the
> limit does not need to be restored afterwards.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> ---
> 
> Hi,
> 
> V2 removed the `ulimit -l $lmem` restore step: each test case runs in
> its own child process, so raising the limit there won't affect other
> tests.
> 
> Thanks,
> Zorro
> 
>  tests/generic/233 | 6 ++++++
>  tests/generic/270 | 6 ++++++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/tests/generic/233 b/tests/generic/233
> index 7eda5774..cc794c79 100755
> --- a/tests/generic/233
> +++ b/tests/generic/233
> @@ -43,6 +43,12 @@ _fsstress()
>  -f rename=10 -f fsync=2 -f write=15 -f dwrite=15 \
>  -n $count -d $out -p 7`
>  
> +	# io_uring accounts the memory it needs against the memlock rlimit,
> +	# which can be quite low on some setups (especially with a 64K page
> +	# size). root isn't under this restriction, but regular users are. To
> +	# avoid io_uring_queue_init() failing with ENOMEM, set the max locked
> +	# memory to unlimited temporarily.
> +	ulimit -l unlimited
>  	echo "fsstress $args" >> $seqres.full
>  	if ! su $qa_user -c "$FSSTRESS_PROG $args" | tee -a $seqres.full | _filter_num

/me kinda feels like this should be refactored into a common helper, but
somehow when I try to picture that in my head all I can see is a
screeching nightmare of bash goop so feel free to ignore me. :)
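
In case it helps, a purely hypothetical sketch of such a helper (the name
and its placement in common/rc are made up, not existing fstests API):

	# Run fsstress as the unprivileged test user with the memlock rlimit
	# lifted, so io_uring setup inside fsstress doesn't hit ENOMEM.
	_run_fsstress_as_qa_user()
	{
		ulimit -l unlimited
		su $qa_user -c "$FSSTRESS_PROG $*"
	}

generic/270 would still need its setcap'd copy of fsstress and the
backgrounding, which is probably where the bash goop starts.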

--D

>  	then
> diff --git a/tests/generic/270 b/tests/generic/270
> index 3d8656d4..e93940ef 100755
> --- a/tests/generic/270
> +++ b/tests/generic/270
> @@ -37,6 +37,12 @@ _workout()
>  	cp $FSSTRESS_PROG  $tmp.fsstress.bin
>  	$SETCAP_PROG cap_chown=epi  $tmp.fsstress.bin
>  
> +	# io_uring accounts the memory it needs against the memlock rlimit,
> +	# which can be quite low on some setups (especially with a 64K page
> +	# size). root isn't under this restriction, but regular users are. To
> +	# avoid io_uring_queue_init() failing with ENOMEM, set the max locked
> +	# memory to unlimited temporarily.
> +	ulimit -l unlimited
>  	(su $qa_user -c "$tmp.fsstress.bin $args" &) > /dev/null 2>&1
>  
>  	echo "Run dd writers in parallel"
> -- 
> 2.30.2
> 

