linux-block.vger.kernel.org archive mirror
* [PATCH blktests 0/5] Fix failures found with zoned block devices
@ 2019-02-20  8:12 Shin'ichiro Kawasaki
  2019-02-20  8:12 ` [PATCH blktests 1/5] block/024: Increase I/O time Shin'ichiro Kawasaki
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Shin'ichiro Kawasaki @ 2019-02-20  8:12 UTC (permalink / raw)
  To: linux-block, Omar Sandoval, Masato Suzuki, Shinichiro Kawasaki
  Cc: Omar Sandoval, Chaitanya Kulkarni

This patch series addresses two spurious test failures found while testing
zoned block devices. Two other problems with the check script and the common
rc script are also fixed.

More specifically,
* Patch 1 addresses a spurious failure of block/024 caused by a shorter write
  I/O time than expected on very fast systems with low overhead.
* Patch 2 fixes test zbd/004, which can fail when a disk closes an implicitly
  open zone after completion of a write command. To avoid this failure, the
  closed zone condition is added as an allowed condition.
* Patches 3 to 5 fix problems accessing block device sysfs attributes when
  the target device is a partition.

Of note is that test block/004 still fails when the target device is a
partition of a zoned block device. That failure is caused by an incorrect
access to sysfs disk attributes by fio. A patch fixing this issue was sent
to the fio mailing list.

Masato Suzuki (1):
  block/024: Increase I/O time

Shin'ichiro Kawasaki (4):
  zbd/004: Add zone condition "Closed" for sequential write required
    zones
  check: Add TEST_DEV_PART_SYSFS variable
  common: Add _test_dev_is_partition() helper function
  zbd: Change sysfs path for partition devices

 check           | 51 ++++++++++++++++++++++++++++++++-----------------
 common/rc       |  8 ++++++++
 new             | 16 ++++++++++++++--
 tests/block/024 |  5 +++--
 tests/zbd/004   |  9 ++++++++-
 tests/zbd/rc    |  9 +++++++--
 6 files changed, 74 insertions(+), 24 deletions(-)

-- 
2.20.1


* [PATCH blktests 1/5] block/024: Increase I/O time
  2019-02-20  8:12 [PATCH blktests 0/5] Fix failures found with zoned block devices Shin'ichiro Kawasaki
@ 2019-02-20  8:12 ` Shin'ichiro Kawasaki
  2019-02-20  8:12 ` [PATCH blktests 2/5] zbd/004: Add zone condition "Closed" for sequential write required zones Shin'ichiro Kawasaki
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Shin'ichiro Kawasaki @ 2019-02-20  8:12 UTC (permalink / raw)
  To: linux-block, Omar Sandoval, Masato Suzuki, Shinichiro Kawasaki
  Cc: Omar Sandoval, Chaitanya Kulkarni

From: Masato Suzuki <masato.suzuki@wdc.com>

The exact total time spent on write I/Os for the 3 dd executions is
1500 * 3 * 0.5 ms = 2250 ms, that is, 2.25 s. Due to system overhead and
timer triggers, a total write I/O time larger than this value is generally
reported in the I/O statistics. However, on a system with very low overhead
and/or very precise timers, the reported value can be under 2.5 s, leading
to a rounded value of 2 s instead of the expected 3 s.

To prevent the test from failing due to this problem, increase the number
of write I/Os executed so that the exact total I/O time exceeds 2.5 s,
leading to a rounded value of 3 s. The change increases the I/O count of
the second and third dd calls to 1800, for an exact write I/O time of
(1500 + 1800 * 2) * 0.5 ms = 2.55 s.
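
For reference, a minimal shell sketch of the rounding arithmetic above (not
part of the patch; it simply restates the 0.5 ms per-I/O figure used in this
message):

    # 0.5 ms per write I/O, so total time in ms is the I/O count divided by 2.
    echo "old: $((1500 * 3 / 2)) ms"            # 2250 ms, may round down to 2 s
    echo "new: $(((1500 + 1800 * 2) / 2)) ms"   # 2550 ms, rounds to 3 s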

Signed-off-by: Masato Suzuki <masato.suzuki@wdc.com>
---
 tests/block/024 | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tests/block/024 b/tests/block/024
index cf14707..b40a869 100755
--- a/tests/block/024
+++ b/tests/block/024
@@ -53,9 +53,10 @@ test() {
 	dd if=/dev/zero of=/dev/nullb0 bs=4096 oflag=direct count=1500 status=none
 	show_times
 
+	# 1800 * 0.5 ms is 0.9 seconds.
 	dd if=/dev/nullb0 of=/dev/null bs=4096 iflag=direct count=1500 status=none &
-	dd if=/dev/zero of=/dev/nullb0 bs=4096 oflag=direct count=1500 status=none &
-	dd if=/dev/zero of=/dev/nullb0 bs=4096 oflag=direct count=1500 status=none &
+	dd if=/dev/zero of=/dev/nullb0 bs=4096 oflag=direct count=1800 status=none &
+	dd if=/dev/zero of=/dev/nullb0 bs=4096 oflag=direct count=1800 status=none &
 	wait
 	show_times
 
-- 
2.20.1


* [PATCH blktests 2/5] zbd/004: Add zone condition "Closed" for sequential write required zones
  2019-02-20  8:12 [PATCH blktests 0/5] Fix failures found with zoned block devices Shin'ichiro Kawasaki
  2019-02-20  8:12 ` [PATCH blktests 1/5] block/024: Increase I/O time Shin'ichiro Kawasaki
@ 2019-02-20  8:12 ` Shin'ichiro Kawasaki
  2019-02-20  8:12 ` [PATCH blktests 3/5] check: Add TEST_DEV_PART_SYSFS variable Shin'ichiro Kawasaki
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Shin'ichiro Kawasaki @ 2019-02-20  8:12 UTC (permalink / raw)
  To: linux-block, Omar Sandoval, Masato Suzuki, Shinichiro Kawasaki
  Cc: Omar Sandoval, Chaitanya Kulkarni

The test case zbd/004 executes write operations to two sequential write
required zones across the zone boundary between them. After the write
operation, the second zone has a non-zero write pointer. In that state,
the zone can be in either the "Implicit Open" condition or the "Closed"
condition, depending on the zone state management logic of the zoned
block device.

Add the "Closed" condition to the zone condition check in zbd/004, and
add the ZONE_COND_CLOSED constant definition in zbd/rc.
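
As a minimal sketch of the relaxed check introduced below, using the
ZONE_CONDS array populated by _get_blkzone_report and the ZONE_COND_*
constants defined in tests/zbd/rc (the zone index is illustrative):

    idx=3                        # illustrative zone index
    cond=${ZONE_CONDS[idx+1]}
    if ((cond == ZONE_COND_IMPLICIT_OPEN || cond == ZONE_COND_CLOSED)); then
            echo "zone $((idx + 1)) is in an acceptable condition"
    fi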

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
 tests/zbd/004 | 9 ++++++++-
 tests/zbd/rc  | 1 +
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tests/zbd/004 b/tests/zbd/004
index 291626d..ac0cf50 100755
--- a/tests/zbd/004
+++ b/tests/zbd/004
@@ -83,7 +83,14 @@ test_device() {
 	_put_blkzone_report
 	_get_blkzone_report "${TEST_DEV}" || return $?
 	_check_zone_cond "${idx}" "${ZONE_COND_FULL}" || return $?
-	_check_zone_cond "$((idx+1))" "${ZONE_COND_IMPLICIT_OPEN}" || return $?
+	if ((ZONE_CONDS[idx+1] != ZONE_COND_IMPLICIT_OPEN)) && \
+		   ((ZONE_CONDS[idx+1] != ZONE_COND_CLOSED)); then
+		echo -n "Zone $((idx+1)) condition is neither "
+		echo -n "${ZONE_COND_ARRAY[ZONE_COND_IMPLICIT_OPEN]} nor "
+		echo -n "${ZONE_COND_ARRAY[ZONE_COND_CLOSED]} "
+		echo "cond: ${ZONE_COND_ARRAY[ZONE_CONDS[idx+1]]}"
+		return 1
+	fi
 	if [[ ${ZONE_WPTRS[idx+1]} -ne ${phys_blk_sectors} ]]; then
 		echo -n "Unexpected write pointer for zone $((idx+1)) "
 		echo "wp: ${ZONE_WPTRS[idx+1]}"
diff --git a/tests/zbd/rc b/tests/zbd/rc
index 1d6f80a..c32bf31 100644
--- a/tests/zbd/rc
+++ b/tests/zbd/rc
@@ -39,6 +39,7 @@ export ZONE_TYPE_SEQ_WRITE_PREFERRED=3
 
 export ZONE_COND_EMPTY=1
 export ZONE_COND_IMPLICIT_OPEN=2
+export ZONE_COND_CLOSED=4
 export ZONE_COND_FULL=14
 
 export ZONE_TYPE_ARRAY=(
-- 
2.20.1


* [PATCH blktests 3/5] check: Add TEST_DEV_PART_SYSFS variable
  2019-02-20  8:12 [PATCH blktests 0/5] Fix failures found with zoned block devices Shin'ichiro Kawasaki
  2019-02-20  8:12 ` [PATCH blktests 1/5] block/024: Increase I/O time Shin'ichiro Kawasaki
  2019-02-20  8:12 ` [PATCH blktests 2/5] zbd/004: Add zone condition "Closed" for sequential write required zones Shin'ichiro Kawasaki
@ 2019-02-20  8:12 ` Shin'ichiro Kawasaki
  2019-03-04 22:34   ` Omar Sandoval
  2019-02-20  8:12 ` [PATCH blktests 4/5] common: Add _test_dev_is_partition() helper function Shin'ichiro Kawasaki
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Shin'ichiro Kawasaki @ 2019-02-20  8:12 UTC (permalink / raw)
  To: linux-block, Omar Sandoval, Masato Suzuki, Shinichiro Kawasaki
  Cc: Omar Sandoval, Chaitanya Kulkarni

When a partition device is specified in TEST_DEV, the TEST_DEV_SYSFS
variable points to the sysfs path of the holder device of the partition
(e.g., /sys/block/sda). This sysfs path is different from the sysfs path
of the partition device itself (e.g., /sys/block/sda/sda1). For example,
the size attribute exists in both the holder device sysfs and the
partition device sysfs, with different values.

To allow test cases to access the sysfs path of partition devices, add
the TEST_DEV_PART_SYSFS variable. TEST_DEV_SYSFS is kept as is and refers
to the sysfs path of the holder device. If TEST_DEV is not a partition
device, TEST_DEV_PART_SYSFS is set to an empty string.

Change the _find_sysfs_dir() function to return both the holder device
sysfs path and the partition device sysfs path. The function obtains the
canonical sysfs path, and if the device is a partition device, cuts off
the last device name of the canonical sysfs path to obtain the holder
device sysfs path.
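
To make the holder/partition difference concrete, it can be observed directly
in sysfs (the device names below are illustrative):

    cat /sys/block/sda/size            # capacity of the whole disk, in 512-byte sectors
    cat /sys/block/sda/sda1/size       # capacity of the partition only
    cat /sys/block/sda/sda1/partition  # exists (and holds the partition number) only for partitions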

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
 check | 51 ++++++++++++++++++++++++++++++++++-----------------
 new   | 16 ++++++++++++++--
 2 files changed, 48 insertions(+), 19 deletions(-)

diff --git a/check b/check
index f41ecba..e45b34f 100755
--- a/check
+++ b/check
@@ -442,13 +442,19 @@ _run_test() {
 				_warning "$TEST_NAME: fallback_device call failure"
 				return 0
 			fi
-			if ! sysfs_dir="$(_find_sysfs_dir "$test_dev")"; then
+
+			local dirs
+			local sysfs_dir
+			local part_sysfs_dir
+			if ! dirs=$(_find_sysfs_dir "$test_dev") ; then
 				_warning "$TEST_NAME: could not find sysfs directory for ${test_dev}"
 				cleanup_fallback_device
 				return 0
 			fi
+			read -r sysfs_dir part_sysfs_dir < <(echo "${dirs}")
 			TEST_DEVS=( "${test_dev}" )
 			TEST_DEV_SYSFS_DIRS["$test_dev"]="$sysfs_dir"
+			TEST_DEV_PART_SYSFS_DIRS["$test_dev"]="$part_sysfs_dir"
 			FALLBACK_DEVICE=1
 		fi
 
@@ -464,6 +470,7 @@ _run_test() {
 		local ret=0
 		for TEST_DEV in "${TEST_DEVS[@]}"; do
 			TEST_DEV_SYSFS="${TEST_DEV_SYSFS_DIRS["$TEST_DEV"]}"
+			TEST_DEV_PART_SYSFS="${TEST_DEV_PART_SYSFS_DIRS["$TEST_DEV"]}"
 			if (( !CAN_BE_ZONED )) && _test_dev_is_zoned; then
 				SKIP_REASON="${TEST_DEV} is a zoned block device"
 				_output_notrun "$TEST_NAME => $(basename "$TEST_DEV")"
@@ -483,6 +490,7 @@ _run_test() {
 		if (( FALLBACK_DEVICE )); then
 			cleanup_fallback_device
 			unset TEST_DEV_SYSFS_DIRS["${TEST_DEVS[0]}"]
+			unset TEST_DEV_PART_SYSFS_DIRS["${TEST_DEVS[0]}"]
 			TEST_DEVS=()
 		fi
 
@@ -507,6 +515,8 @@ _run_group() {
 		for i in "${!TEST_DEVS[@]}"; do
 			TEST_DEV="${TEST_DEVS[$i]}"
 			TEST_DEV_SYSFS="${TEST_DEV_SYSFS_DIRS["$TEST_DEV"]}"
+			# shellcheck disable=SC2034
+			TEST_DEV_PART_SYSFS="${TEST_DEV_PART_SYSFS_DIRS["$TEST_DEV"]}"
 			if ! group_device_requires; then
 				_output_notrun "${group}/*** => $(basename "$TEST_DEV")"
 				unset TEST_DEVS["$i"]
@@ -529,28 +539,31 @@ _run_group() {
 
 _find_sysfs_dir() {
 	local test_dev="$1"
+	local sysfs_path
 	local major=$((0x$(stat -L -c '%t' "$test_dev")))
 	local minor=$((0x$(stat -L -c '%T' "$test_dev")))
-	local dev="$major:$minor"
+	local sysdev_path="/sys/dev/block/${major}:${minor}"
 
-	local block_dir part_dir
-	for block_dir in /sys/block/*; do
-		if [[ $(cat "${block_dir}/dev") = "$dev" ]]; then
-			echo "$block_dir"
-			return
-		fi
-		for part_dir in "$block_dir"/*; do
-			if [[ -r ${part_dir}/dev && $(cat "${part_dir}/dev") = "$dev" ]]; then
-				echo "$block_dir"
-				return
-			fi
-		done
-	done
+	# Get the canonical sysfs path
+	if ! sysfs_path=/sys/dev/block/$(readlink "${sysdev_path}"); then
+		return 1
+	fi
 
-	return 1
+	if [[ -r "${sysfs_path}"/partition ]]; then
+		# If the device is a partition device, cut the last device name
+		# of the canonical sysfs path to access to the sysfs of its
+		# holder device.
+		#   e.g.   .../block/sda/sda1  ->  ...block/sda
+		# Return both the holder device sysfs path and the partition
+		# device sysfs path.
+		echo "${sysfs_path%/*}" "${sysfs_path}"
+	else
+		echo "${sysfs_path}" ""
+	fi
 }
 
 declare -A TEST_DEV_SYSFS_DIRS
+declare -A TEST_DEV_PART_SYSFS_DIRS
 _check() {
 	# shellcheck disable=SC2034
 	SRCDIR="$(realpath src)"
@@ -563,11 +576,15 @@ _check() {
 			_error "${test_dev} is not a block device"
 		fi
 
+		local dirs
 		local sysfs_dir
-		if ! sysfs_dir="$(_find_sysfs_dir "$test_dev")"; then
+		local part_sysfs_dir
+		if ! dirs=$(_find_sysfs_dir "$test_dev") ; then
 			_error "could not find sysfs directory for ${test_dev}"
 		fi
+		read -r sysfs_dir part_sysfs_dir < <(echo "${dirs}")
 		TEST_DEV_SYSFS_DIRS["$test_dev"]="$sysfs_dir"
+		TEST_DEV_PART_SYSFS_DIRS["$test_dev"]="$part_sysfs_dir"
 	done
 
 	local test_name group prev_group
diff --git a/new b/new
index d7d5f7c..24c066d 100755
--- a/new
+++ b/new
@@ -80,7 +80,10 @@ group_requires() {
 # should return non-zero and set the \$SKIP_REASON variable. \$TEST_DEV is the
 # full path of the block device (e.g., /dev/nvme0n1 or /dev/sda1), and
 # \$TEST_DEV_SYSFS is the sysfs path of the disk (not the partition, e.g.,
-# /sys/block/nvme0n1 or /sys/block/sda).
+# /sys/block/nvme0n1 or /sys/block/sda). If the target device is a partition
+# device, \$TEST_DEV_PART_SYSFS is the sysfs path of the partition device
+# (e.g., /sys/block/nvme0n1/nvme0n1p1 or /sys/block/sda/sda1). Otherwise,
+# \$TEST_DEV_PART_SYSFS is an empty string.
 #
 # Usually, group_device_requires() just needs to check that the test device is
 # the right type of hardware or supports any necessary features using the
@@ -165,7 +168,10 @@ DESCRIPTION=""
 # set the \$SKIP_REASON variable. \$TEST_DEV is the full path of the block
 # device (e.g., /dev/nvme0n1 or /dev/sda1), and \$TEST_DEV_SYSFS is the sysfs
 # path of the disk (not the partition, e.g., /sys/block/nvme0n1 or
-# /sys/block/sda).
+# /sys/block/sda). If the target device is a partition device,
+# \$TEST_DEV_PART_SYSFS is the sysfs path of the partition device (e.g.,
+# /sys/block/nvme0n1/nvme0n1p1 or /sys/block/sda/sda1). Otherwise,
+# \$TEST_DEV_PART_SYSFS is an empty string.
 #
 # Usually, device_requires() just needs to check that the test device is the
 # right type of hardware or supports any necessary features using the
@@ -207,6 +213,12 @@ DESCRIPTION=""
 #    - \$TEST_DEV_SYSFS -- the sysfs directory of the device (e.g.,
 #                         /sys/block/sda). In general, you should use the
 #                         _test_dev_queue_{get,set} helpers.
+#                         If the device is a partition device, the sysfs
+#                         directory of its holder device is set.
+#    - \$TEST_DEV_PART_SYSFS -- the sysfs directory of the device if the device
+#                               is a partition device (e.g.,
+#                               /sys/block/sda/sda1). Empty string is set if
+#                               the device is not a partition device.
 test() {
 	echo "Running \${TEST_NAME}"
 
-- 
2.20.1


* [PATCH blktests 4/5] common: Add _test_dev_is_partition() helper function
  2019-02-20  8:12 [PATCH blktests 0/5] Fix failures found with zoned block devices Shin'ichiro Kawasaki
                   ` (2 preceding siblings ...)
  2019-02-20  8:12 ` [PATCH blktests 3/5] check: Add TEST_DEV_PART_SYSFS variable Shin'ichiro Kawasaki
@ 2019-02-20  8:12 ` Shin'ichiro Kawasaki
  2019-03-04 22:36   ` Omar Sandoval
  2019-02-20  8:12 ` [PATCH blktests 5/5] zbd: Change sysfs path for partition devices Shin'ichiro Kawasaki
  2019-02-20 18:22 ` [PATCH blktests 0/5] Fix failures found with zoned block devices Omar Sandoval
  5 siblings, 1 reply; 12+ messages in thread
From: Shin'ichiro Kawasaki @ 2019-02-20  8:12 UTC (permalink / raw)
  To: linux-block, Omar Sandoval, Masato Suzuki, Shinichiro Kawasaki
  Cc: Omar Sandoval, Chaitanya Kulkarni

To control test conditions unique to partition devices, introduce the
_test_dev_is_partition() helper function. It refers to the
TEST_DEV_PART_SYSFS variable to tell whether TEST_DEV is a partition
device or not.
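
A hypothetical usage sketch (not taken from an existing test) showing how a
test could restrict itself to partition devices through the device_requires()
hook:

    device_requires() {
            # Skip unless TEST_DEV is a partition device; the helper sets
            # SKIP_REASON when it returns non-zero.
            _test_dev_is_partition
    }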

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
 common/rc | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/common/rc b/common/rc
index 153a323..0c354a2 100644
--- a/common/rc
+++ b/common/rc
@@ -208,6 +208,14 @@ _test_dev_in_hotplug_slot() {
 	return 0
 }
 
+_test_dev_is_partition() {
+	if [[ -z ${TEST_DEV_PART_SYSFS} ]]; then
+		SKIP_REASON="${TEST_DEV} is not a partition device"
+		return 1
+	fi
+	return 0
+}
+
 # Older versions of xfs_io use pwrite64 and such, so the error messages won't
 # match current versions of xfs_io. See c52086226bc6 ("filter: xfs_io output
 # has dropped "64" from error messages") in xfstests.
-- 
2.20.1


* [PATCH blktests 5/5] zbd: Change sysfs path for partition devices
  2019-02-20  8:12 [PATCH blktests 0/5] Fix failures found with zoned block devices Shin'ichiro Kawasaki
                   ` (3 preceding siblings ...)
  2019-02-20  8:12 ` [PATCH blktests 4/5] common: Add _test_dev_is_partition() helper function Shin'ichiro Kawasaki
@ 2019-02-20  8:12 ` Shin'ichiro Kawasaki
  2019-03-04 22:36   ` Omar Sandoval
  2019-02-20 18:22 ` [PATCH blktests 0/5] Fix failures found with zoned block devices Omar Sandoval
  5 siblings, 1 reply; 12+ messages in thread
From: Shin'ichiro Kawasaki @ 2019-02-20  8:12 UTC (permalink / raw)
  To: linux-block, Omar Sandoval, Masato Suzuki, Shinichiro Kawasaki
  Cc: Omar Sandoval, Chaitanya Kulkarni

The zbd/001 and zbd/002 test cases fail for partition devices because of
sysfs path differences between the partition devices and their holder
devices. The size attribute in sysfs differs between a partition device
and its holder device, and the holder devices have the nr_zones attribute
in sysfs while the partition devices do not.

Use the _test_dev_is_partition() helper function and the TEST_DEV_PART_SYSFS
variable to refer to the correct sysfs size attribute for partition devices.
Do not refer to the sysfs nr_zones attribute for partition devices; instead,
calculate the expected nr_zones from the device capacity and zone size.
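
As a rough illustration of that fallback, a ceiling division over the
capacity (the values below are illustrative):

    capacity=8388608    # partition size in 512-byte sectors (illustrative)
    zone_size=524288    # queue/chunk_sectors of the holder device (illustrative)
    echo $(( (capacity - 1) / zone_size + 1 ))   # -> 16; rounds up if the last zone is smaller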

Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
 tests/zbd/rc | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/tests/zbd/rc b/tests/zbd/rc
index c32bf31..88538d0 100644
--- a/tests/zbd/rc
+++ b/tests/zbd/rc
@@ -75,7 +75,11 @@ export SV_NR_ZONES=4
 _get_sysfs_variable() {
 	unset SYSFS_VARS
 	local _dir=${TEST_DEV_SYSFS}
-	SYSFS_VARS[$SV_CAPACITY]=$(<"${_dir}"/size)
+	if _test_dev_is_partition; then
+		SYSFS_VARS[$SV_CAPACITY]=$(<"${TEST_DEV_PART_SYSFS}"/size)
+	else
+		SYSFS_VARS[$SV_CAPACITY]=$(<"${_dir}"/size)
+	fi
 	SYSFS_VARS[$SV_CHUNK_SECTORS]=$(<"${_dir}"/queue/chunk_sectors)
 	SYSFS_VARS[$SV_PHYS_BLK_SIZE]=$(<"${_dir}"/queue/physical_block_size)
 	SYSFS_VARS[$SV_PHYS_BLK_SECTORS]=$((SYSFS_VARS[SV_PHYS_BLK_SIZE] / 512))
@@ -83,7 +87,7 @@ _get_sysfs_variable() {
 	# If the nr_zones sysfs attribute exists, get its value. Otherwise,
 	# calculate its value based on the total capacity and zone size, taking
 	# into account that the last zone can be smaller than other zones.
-	if [[ -e ${TEST_DEV_SYSFS}/queue/nr_zones ]]; then
+	if [[ -e "${_dir}"/queue/nr_zones ]] && ! _test_dev_is_partition; then
 		SYSFS_VARS[$SV_NR_ZONES]=$(<"${_dir}"/queue/nr_zones)
 	else
 		SYSFS_VARS[$SV_NR_ZONES]=$(( (SYSFS_VARS[SV_CAPACITY] - 1) \
-- 
2.20.1


* Re: [PATCH blktests 0/5] Fix failures found with zoned block devices
  2019-02-20  8:12 [PATCH blktests 0/5] Fix failures found with zoned block devices Shin'ichiro Kawasaki
                   ` (4 preceding siblings ...)
  2019-02-20  8:12 ` [PATCH blktests 5/5] zbd: Change sysfs path for partition devices Shin'ichiro Kawasaki
@ 2019-02-20 18:22 ` Omar Sandoval
  2019-02-21  2:50   ` Damien Le Moal
  5 siblings, 1 reply; 12+ messages in thread
From: Omar Sandoval @ 2019-02-20 18:22 UTC (permalink / raw)
  To: Shin'ichiro Kawasaki
  Cc: linux-block, Omar Sandoval, Masato Suzuki, Chaitanya Kulkarni

On Wed, Feb 20, 2019 at 05:12:26PM +0900, Shin'ichiro Kawasaki wrote:
> This patch series addresses two spurious test failures found while testing
> zoned block devices. Two other problems with the check script and the common
> rc script are also fixed.
>
> More specifically,
> * Patch 1 addresses a spurious failure of block/024 caused by a shorter write
>   I/O time than expected on very fast systems with low overhead.
> * Patch 2 fixes test zbd/004, which can fail when a disk closes an implicitly
>   open zone after completion of a write command. To avoid this failure, the
>   closed zone condition is added as an allowed condition.
> * Patches 3 to 5 fix problems accessing block device sysfs attributes when
>   the target device is a partition.
>
> Of note is that test block/004 still fails when the target device is a
> partition of a zoned block device. That failure is caused by an incorrect
> access to sysfs disk attributes by fio. A patch fixing this issue was sent
> to the fio mailing list.

Thanks, I merged 1 and 2. I'm not necessarily opposed to the rest, but
I'm curious what your use case is for testing on partitions?

> Masato Suzuki (1):
>   block/024: Increase I/O time
> 
> Shin'ichiro Kawasaki (4):
>   zbd/004: Add zone condition "Closed" for sequential write required
>     zones
>   check: Add TEST_DEV_PART_SYSFS variable
>   common: Add _test_dev_is_partition() helper function
>   zbd: Change sysfs path for partition devices
> 
>  check           | 51 ++++++++++++++++++++++++++++++++-----------------
>  common/rc       |  8 ++++++++
>  new             | 16 ++++++++++++++--
>  tests/block/024 |  5 +++--
>  tests/zbd/004   |  9 ++++++++-
>  tests/zbd/rc    |  9 +++++++--
>  6 files changed, 74 insertions(+), 24 deletions(-)
> 
> -- 
> 2.20.1
> 

* Re: [PATCH blktests 0/5] Fix failures found with zoned block devices
  2019-02-20 18:22 ` [PATCH blktests 0/5] Fix failures found with zoned block devices Omar Sandoval
@ 2019-02-21  2:50   ` Damien Le Moal
  0 siblings, 0 replies; 12+ messages in thread
From: Damien Le Moal @ 2019-02-21  2:50 UTC (permalink / raw)
  To: Shinichiro Kawasaki, osandov
  Cc: Chaitanya Kulkarni, linux-block, osandov, Masato Suzuki

Hi Omar,

On Wed, 2019-02-20 at 10:22 -0800, Omar Sandoval wrote:
> On Wed, Feb 20, 2019 at 05:12:26PM +0900, Shin'ichiro Kawasaki wrote:
> > This patch series addresses two spurious test failures found while
> > testing zoned block devices. Two other problems with the check script
> > and the common rc script are also fixed.
> >
> > More specifically,
> > * Patch 1 addresses a spurious failure of block/024 caused by a
> >   shorter write I/O time than expected on very fast systems with low
> >   overhead.
> > * Patch 2 fixes test zbd/004, which can fail when a disk closes an
> >   implicitly open zone after completion of a write command. To avoid
> >   this failure, the closed zone condition is added as an allowed
> >   condition.
> > * Patches 3 to 5 fix problems accessing block device sysfs attributes
> >   when the target device is a partition.
> >
> > Of note is that test block/004 still fails when the target device is
> > a partition of a zoned block device. That failure is caused by an
> > incorrect access to sysfs disk attributes by fio. A patch fixing this
> > issue was sent to the fio mailing list.
> 
> Thanks, I merged 1 and 2. I'm not necessarily opposed to the rest, but
> I'm curious what your use case is for testing on partitions?

Paraphrasing my answer to Bart, who had a similar question on the fio
mailing list:

For host-managed disks, partitioning a zoned block device is not a very
compelling use case, nor is it commonly used in the field as far as I
know. Chunking a host-managed disk into smaller drives with dm-linear is
likely a better option. There is one use case I have seen in the field,
though, where a partition was created over the conventional zone space
of a disk to obtain an essentially normal (i.e. randomly writable) disk,
with an ext4 file system on top holding a metadata DB that manages data
stored on the remaining sequential zones of the disk. Such a choice
simplifies the overall system design by enabling the use of proven
components (ext4, a DB) rather than writing or porting everything to fit
a pure zoned block device model.

The main use case for partitions is with host-aware disk models, as
these can be written randomly anywhere and so are in essence "normal"
disks, even more so considering that these drives' SCSI device type is
0x0, equivalent to regular disk devices. As such, partitioning
host-aware disks in the same manner as regular disks is a valid use
case.

The intent of these fixes is to allow running tests against partitions
to check the kernel's handling of partition sector remapping for zoned
block disks. Since the kernel supports (that is, does not prevent)
partitions of zoned disks, extending test coverage to that code is, I
think, useful. No test cases or groups specific to partitions are needed
in blktests; we only need the ability to specify a partition device file
in the blktests configuration (the TEST_DEVS variable). Test runs on top
of zoned dm-linear devices have the same intent (sector remapping
verification), and that already works as is. Getting the same to run for
partitions would close the gap.
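
For illustration, pointing blktests at a partition only requires a
configuration entry such as this (the device name is of course
illustrative):

    # ./config in the blktests source tree
    TEST_DEVS=(/dev/sdb1)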

Thank you for your review.

Best regards.

-- 
Damien Le Moal
Western Digital Research

* Re: [PATCH blktests 3/5] check: Add TEST_DEV_PART_SYSFS variable
  2019-02-20  8:12 ` [PATCH blktests 3/5] check: Add TEST_DEV_PART_SYSFS variable Shin'ichiro Kawasaki
@ 2019-03-04 22:34   ` Omar Sandoval
  2019-03-05  1:47     ` Shinichiro Kawasaki
  0 siblings, 1 reply; 12+ messages in thread
From: Omar Sandoval @ 2019-03-04 22:34 UTC (permalink / raw)
  To: Shin'ichiro Kawasaki
  Cc: linux-block, Omar Sandoval, Masato Suzuki, Chaitanya Kulkarni

On Wed, Feb 20, 2019 at 05:12:29PM +0900, Shin'ichiro Kawasaki wrote:
> When a partition device is specified in TEST_DEV, the TEST_DEV_SYSFS
> variable points to the sysfs path of the holder device of the partition
> (e.g., /sys/block/sda). This sysfs path is different from the sysfs path
> of the partition device itself (e.g., /sys/block/sda/sda1). For example,
> the size attribute exists in both the holder device sysfs and the
> partition device sysfs, with different values.
>
> To allow test cases to access the sysfs path of partition devices, add
> the TEST_DEV_PART_SYSFS variable. TEST_DEV_SYSFS is kept as is and refers
> to the sysfs path of the holder device. If TEST_DEV is not a partition
> device, TEST_DEV_PART_SYSFS is set to an empty string.
>
> Change the _find_sysfs_dir() function to return both the holder device
> sysfs path and the partition device sysfs path. The function obtains the
> canonical sysfs path, and if the device is a partition device, cuts off
> the last device name of the canonical sysfs path to obtain the holder
> device sysfs path.

This makes sense. A couple of small tweaks below.

> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> ---
>  check | 51 ++++++++++++++++++++++++++++++++++-----------------
>  new   | 16 ++++++++++++++--
>  2 files changed, 48 insertions(+), 19 deletions(-)
> 
> diff --git a/check b/check
> index f41ecba..e45b34f 100755
> --- a/check
> +++ b/check
> @@ -442,13 +442,19 @@ _run_test() {
>  				_warning "$TEST_NAME: fallback_device call failure"
>  				return 0
>  			fi
> -			if ! sysfs_dir="$(_find_sysfs_dir "$test_dev")"; then
> +
> +			local dirs
> +			local sysfs_dir
> +			local part_sysfs_dir
> +			if ! dirs=$(_find_sysfs_dir "$test_dev") ; then
>  				_warning "$TEST_NAME: could not find sysfs directory for ${test_dev}"
>  				cleanup_fallback_device
>  				return 0
>  			fi
> +			read -r sysfs_dir part_sysfs_dir < <(echo "${dirs}")

Let's rename _find_sysfs_dir to find_sysfs_dirs and make it set
TEST_DEV_SYSFS_DIRS["$test_dev"] and
TEST_DEV_PART_SYSFS_DIRS["$test_dev"] itself instead of returning the
string.

>  			TEST_DEVS=( "${test_dev}" )
>  			TEST_DEV_SYSFS_DIRS["$test_dev"]="$sysfs_dir"
> +			TEST_DEV_PART_SYSFS_DIRS["$test_dev"]="$part_sysfs_dir"
>  			FALLBACK_DEVICE=1
>  		fi
>  
> @@ -464,6 +470,7 @@ _run_test() {
>  		local ret=0
>  		for TEST_DEV in "${TEST_DEVS[@]}"; do
>  			TEST_DEV_SYSFS="${TEST_DEV_SYSFS_DIRS["$TEST_DEV"]}"
> +			TEST_DEV_PART_SYSFS="${TEST_DEV_PART_SYSFS_DIRS["$TEST_DEV"]}"
>  			if (( !CAN_BE_ZONED )) && _test_dev_is_zoned; then
>  				SKIP_REASON="${TEST_DEV} is a zoned block device"
>  				_output_notrun "$TEST_NAME => $(basename "$TEST_DEV")"
> @@ -483,6 +490,7 @@ _run_test() {
>  		if (( FALLBACK_DEVICE )); then
>  			cleanup_fallback_device
>  			unset TEST_DEV_SYSFS_DIRS["${TEST_DEVS[0]}"]
> +			unset TEST_DEV_PART_SYSFS_DIRS["${TEST_DEVS[0]}"]
>  			TEST_DEVS=()
>  		fi
>  
> @@ -507,6 +515,8 @@ _run_group() {
>  		for i in "${!TEST_DEVS[@]}"; do
>  			TEST_DEV="${TEST_DEVS[$i]}"
>  			TEST_DEV_SYSFS="${TEST_DEV_SYSFS_DIRS["$TEST_DEV"]}"
> +			# shellcheck disable=SC2034
> +			TEST_DEV_PART_SYSFS="${TEST_DEV_PART_SYSFS_DIRS["$TEST_DEV"]}"
>  			if ! group_device_requires; then
>  				_output_notrun "${group}/*** => $(basename "$TEST_DEV")"
>  				unset TEST_DEVS["$i"]
> @@ -529,28 +539,31 @@ _run_group() {
>  
>  _find_sysfs_dir() {
>  	local test_dev="$1"
> +	local sysfs_path
>  	local major=$((0x$(stat -L -c '%t' "$test_dev")))
>  	local minor=$((0x$(stat -L -c '%T' "$test_dev")))
> -	local dev="$major:$minor"
> +	local sysdev_path="/sys/dev/block/${major}:${minor}"
>  
> -	local block_dir part_dir
> -	for block_dir in /sys/block/*; do
> -		if [[ $(cat "${block_dir}/dev") = "$dev" ]]; then
> -			echo "$block_dir"
> -			return
> -		fi
> -		for part_dir in "$block_dir"/*; do
> -			if [[ -r ${part_dir}/dev && $(cat "${part_dir}/dev") = "$dev" ]]; then
> -				echo "$block_dir"
> -				return
> -			fi
> -		done
> -	done
> +	# Get the canonical sysfs path
> +	if ! sysfs_path=/sys/dev/block/$(readlink "${sysdev_path}"); then

sysfs_path="$(realpath "/sys/dev/block/${major}:${minor}")" is a bit
shorter, does that still work?
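
Putting the two suggestions together, a rough sketch of the reworked helper
could look like this (hypothetical; the revised patch may of course end up
different):

    _find_sysfs_dirs() {
            local test_dev="$1"
            local major minor sysfs_path
            major=$((0x$(stat -L -c '%t' "$test_dev")))
            minor=$((0x$(stat -L -c '%T' "$test_dev")))
            # Canonical sysfs path of the device node.
            sysfs_path="$(realpath "/sys/dev/block/${major}:${minor}")" || return 1
            if [[ -r ${sysfs_path}/partition ]]; then
                    # Partition: the holder device is the parent directory.
                    TEST_DEV_SYSFS_DIRS["$test_dev"]="${sysfs_path%/*}"
                    TEST_DEV_PART_SYSFS_DIRS["$test_dev"]="${sysfs_path}"
            else
                    TEST_DEV_SYSFS_DIRS["$test_dev"]="${sysfs_path}"
                    TEST_DEV_PART_SYSFS_DIRS["$test_dev"]=""
            fi
    }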

> +		return 1
> +	fi
>  
> -	return 1
> +	if [[ -r "${sysfs_path}"/partition ]]; then
> +		# If the device is a partition device, cut the last device name
> +		# of the canonical sysfs path to access to the sysfs of its
> +		# holder device.
> +		#   e.g.   .../block/sda/sda1  ->  ...block/sda
> +		# Return both the holder device sysfs path and the partition
> +		# device sysfs path.
> +		echo "${sysfs_path%/*}" "${sysfs_path}"
> +	else
> +		echo "${sysfs_path}" ""
> +	fi
>  }
>  
>  declare -A TEST_DEV_SYSFS_DIRS
> +declare -A TEST_DEV_PART_SYSFS_DIRS
>  _check() {
>  	# shellcheck disable=SC2034
>  	SRCDIR="$(realpath src)"
> @@ -563,11 +576,15 @@ _check() {
>  			_error "${test_dev} is not a block device"
>  		fi
>  
> +		local dirs
>  		local sysfs_dir
> -		if ! sysfs_dir="$(_find_sysfs_dir "$test_dev")"; then
> +		local part_sysfs_dir
> +		if ! dirs=$(_find_sysfs_dir "$test_dev") ; then
>  			_error "could not find sysfs directory for ${test_dev}"
>  		fi
> +		read -r sysfs_dir part_sysfs_dir < <(echo "${dirs}")
>  		TEST_DEV_SYSFS_DIRS["$test_dev"]="$sysfs_dir"
> +		TEST_DEV_PART_SYSFS_DIRS["$test_dev"]="$part_sysfs_dir"
>  	done
>  
>  	local test_name group prev_group
> diff --git a/new b/new
> index d7d5f7c..24c066d 100755
> --- a/new
> +++ b/new
> @@ -80,7 +80,10 @@ group_requires() {
>  # should return non-zero and set the \$SKIP_REASON variable. \$TEST_DEV is the
>  # full path of the block device (e.g., /dev/nvme0n1 or /dev/sda1), and
>  # \$TEST_DEV_SYSFS is the sysfs path of the disk (not the partition, e.g.,
> -# /sys/block/nvme0n1 or /sys/block/sda).
> +# /sys/block/nvme0n1 or /sys/block/sda). If the target device is a partition
> +# device, \$TEST_DEV_PART_SYSFS is the sysfs path of the partition device
> +# (e.g., /sys/block/nvme0n1/nvme0n1p1 or /sys/block/sda/sda1). Otherwise,
> +# \$TEST_DEV_PART_SYSFS is an empty string.
>  #
>  # Usually, group_device_requires() just needs to check that the test device is
>  # the right type of hardware or supports any necessary features using the
> @@ -165,7 +168,10 @@ DESCRIPTION=""
>  # set the \$SKIP_REASON variable. \$TEST_DEV is the full path of the block
>  # device (e.g., /dev/nvme0n1 or /dev/sda1), and \$TEST_DEV_SYSFS is the sysfs
>  # path of the disk (not the partition, e.g., /sys/block/nvme0n1 or
> -# /sys/block/sda).
> +# /sys/block/sda). If the target device is a partition device,
> +# \$TEST_DEV_PART_SYSFS is the sysfs path of the partition device (e.g.,
> +# /sys/block/nvme0n1/nvme0n1p1 or /sys/block/sda/sda1). Otherwise,
> +# \$TEST_DEV_PART_SYSFS is an empty string.
>  #
>  # Usually, device_requires() just needs to check that the test device is the
>  # right type of hardware or supports any necessary features using the
> @@ -207,6 +213,12 @@ DESCRIPTION=""
>  #    - \$TEST_DEV_SYSFS -- the sysfs directory of the device (e.g.,
>  #                         /sys/block/sda). In general, you should use the
>  #                         _test_dev_queue_{get,set} helpers.
> +#                         If the device is a partition device, the sysfs
> +#                         directory of its holder device is set.
> +#    - \$TEST_DEV_PART_SYSFS -- the sysfs directory of the device if the device
> +#                               is a partition device (e.g.,
> +#                               /sys/block/sda/sda1). Empty string is set if
> +#                               the device is not a partition device.
>  test() {
>  	echo "Running \${TEST_NAME}"
>  
> -- 
> 2.20.1
> 

* Re: [PATCH blktests 4/5] common: Add _test_dev_is_partition() helper function
  2019-02-20  8:12 ` [PATCH blktests 4/5] common: Add _test_dev_is_partition() helper function Shin'ichiro Kawasaki
@ 2019-03-04 22:36   ` Omar Sandoval
  0 siblings, 0 replies; 12+ messages in thread
From: Omar Sandoval @ 2019-03-04 22:36 UTC (permalink / raw)
  To: Shin'ichiro Kawasaki
  Cc: linux-block, Omar Sandoval, Masato Suzuki, Chaitanya Kulkarni

On Wed, Feb 20, 2019 at 05:12:30PM +0900, Shin'ichiro Kawasaki wrote:
> To control test conditions unique to partition devices, introduce the
> _test_dev_is_partition() helper function. It refers to the
> TEST_DEV_PART_SYSFS variable to tell whether TEST_DEV is a partition
> device or not.

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> ---
>  common/rc | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/common/rc b/common/rc
> index 153a323..0c354a2 100644
> --- a/common/rc
> +++ b/common/rc
> @@ -208,6 +208,14 @@ _test_dev_in_hotplug_slot() {
>  	return 0
>  }
>  
> +_test_dev_is_partition() {
> +	if [[ -z ${TEST_DEV_PART_SYSFS} ]]; then
> +		SKIP_REASON="${TEST_DEV} is not a partition device"
> +		return 1
> +	fi
> +	return 0
> +}
> +
>  # Older versions of xfs_io use pwrite64 and such, so the error messages won't
>  # match current versions of xfs_io. See c52086226bc6 ("filter: xfs_io output
>  # has dropped "64" from error messages") in xfstests.
> -- 
> 2.20.1
> 

* Re: [PATCH blktests 5/5] zbd: Change sysfs path for partition devices
  2019-02-20  8:12 ` [PATCH blktests 5/5] zbd: Change sysfs path for partition devices Shin'ichiro Kawasaki
@ 2019-03-04 22:36   ` Omar Sandoval
  0 siblings, 0 replies; 12+ messages in thread
From: Omar Sandoval @ 2019-03-04 22:36 UTC (permalink / raw)
  To: Shin'ichiro Kawasaki
  Cc: linux-block, Omar Sandoval, Masato Suzuki, Chaitanya Kulkarni

On Wed, Feb 20, 2019 at 05:12:31PM +0900, Shin'ichiro Kawasaki wrote:
> The zbd/001 and zbd/002 test cases fail for partition devices because of
> sysfs path differences between the partition devices and their holder
> devices. The size attribute in sysfs differs between a partition device
> and its holder device, and the holder devices have the nr_zones attribute
> in sysfs while the partition devices do not.
>
> Use the _test_dev_is_partition() helper function and the TEST_DEV_PART_SYSFS
> variable to refer to the correct sysfs size attribute for partition devices.
> Do not refer to the sysfs nr_zones attribute for partition devices; instead,
> calculate the expected nr_zones from the device capacity and zone size.

Reviewed-by: Omar Sandoval <osandov@fb.com>

> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> ---
>  tests/zbd/rc | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/tests/zbd/rc b/tests/zbd/rc
> index c32bf31..88538d0 100644
> --- a/tests/zbd/rc
> +++ b/tests/zbd/rc
> @@ -75,7 +75,11 @@ export SV_NR_ZONES=4
>  _get_sysfs_variable() {
>  	unset SYSFS_VARS
>  	local _dir=${TEST_DEV_SYSFS}
> -	SYSFS_VARS[$SV_CAPACITY]=$(<"${_dir}"/size)
> +	if _test_dev_is_partition; then
> +		SYSFS_VARS[$SV_CAPACITY]=$(<"${TEST_DEV_PART_SYSFS}"/size)
> +	else
> +		SYSFS_VARS[$SV_CAPACITY]=$(<"${_dir}"/size)
> +	fi
>  	SYSFS_VARS[$SV_CHUNK_SECTORS]=$(<"${_dir}"/queue/chunk_sectors)
>  	SYSFS_VARS[$SV_PHYS_BLK_SIZE]=$(<"${_dir}"/queue/physical_block_size)
>  	SYSFS_VARS[$SV_PHYS_BLK_SECTORS]=$((SYSFS_VARS[SV_PHYS_BLK_SIZE] / 512))
> @@ -83,7 +87,7 @@ _get_sysfs_variable() {
>  	# If the nr_zones sysfs attribute exists, get its value. Otherwise,
>  	# calculate its value based on the total capacity and zone size, taking
>  	# into account that the last zone can be smaller than other zones.
> -	if [[ -e ${TEST_DEV_SYSFS}/queue/nr_zones ]]; then
> +	if [[ -e "${_dir}"/queue/nr_zones ]] && ! _test_dev_is_partition; then
>  		SYSFS_VARS[$SV_NR_ZONES]=$(<"${_dir}"/queue/nr_zones)
>  	else
>  		SYSFS_VARS[$SV_NR_ZONES]=$(( (SYSFS_VARS[SV_CAPACITY] - 1) \
> -- 
> 2.20.1
> 

* Re: [PATCH blktests 3/5] check: Add TEST_DEV_PART_SYSFS variable
  2019-03-04 22:34   ` Omar Sandoval
@ 2019-03-05  1:47     ` Shinichiro Kawasaki
  0 siblings, 0 replies; 12+ messages in thread
From: Shinichiro Kawasaki @ 2019-03-05  1:47 UTC (permalink / raw)
  To: Omar Sandoval
  Cc: linux-block, Omar Sandoval, Masato Suzuki, Chaitanya Kulkarni

On 3/5/19 7:34 AM, Omar Sandoval wrote:
> On Wed, Feb 20, 2019 at 05:12:29PM +0900, Shin'ichiro Kawasaki wrote:
>> When a partition device is specified in TEST_DEV, the TEST_DEV_SYSFS
>> variable points to the sysfs path of the holder device of the partition
>> (e.g., /sys/block/sda). This sysfs path is different from the sysfs path
>> of the partition device itself (e.g., /sys/block/sda/sda1). For example,
>> the size attribute exists in both the holder device sysfs and the
>> partition device sysfs, with different values.
>>
>> To allow test cases to access the sysfs path of partition devices, add
>> the TEST_DEV_PART_SYSFS variable. TEST_DEV_SYSFS is kept as is and refers
>> to the sysfs path of the holder device. If TEST_DEV is not a partition
>> device, TEST_DEV_PART_SYSFS is set to an empty string.
>>
>> Change the _find_sysfs_dir() function to return both the holder device
>> sysfs path and the partition device sysfs path. The function obtains the
>> canonical sysfs path, and if the device is a partition device, cuts off
>> the last device name of the canonical sysfs path to obtain the holder
>> device sysfs path.
> 
> This makes sense. A couple of small tweaks below.
> 
>> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
>> ---
>>   check | 51 ++++++++++++++++++++++++++++++++++-----------------
>>   new   | 16 ++++++++++++++--
>>   2 files changed, 48 insertions(+), 19 deletions(-)
>>
>> diff --git a/check b/check
>> index f41ecba..e45b34f 100755
>> --- a/check
>> +++ b/check
>> @@ -442,13 +442,19 @@ _run_test() {
>>   				_warning "$TEST_NAME: fallback_device call failure"
>>   				return 0
>>   			fi
>> -			if ! sysfs_dir="$(_find_sysfs_dir "$test_dev")"; then
>> +
>> +			local dirs
>> +			local sysfs_dir
>> +			local part_sysfs_dir
>> +			if ! dirs=$(_find_sysfs_dir "$test_dev") ; then
>>   				_warning "$TEST_NAME: could not find sysfs directory for ${test_dev}"
>>   				cleanup_fallback_device
>>   				return 0
>>   			fi
>> +			read -r sysfs_dir part_sysfs_dir < <(echo "${dirs}")
> 
> Let's rename _find_sysfs_dir to find_sysfs_dirs and make it set
> TEST_DEV_SYSFS_DIRS["$test_dev"] and
> TEST_DEV_PART_SYSFS_DIRS["$test_dev"] itself instead of returning the
> string.

OK. Will revise the patch.

> 
>>   			TEST_DEVS=( "${test_dev}" )
>>   			TEST_DEV_SYSFS_DIRS["$test_dev"]="$sysfs_dir"
>> +			TEST_DEV_PART_SYSFS_DIRS["$test_dev"]="$part_sysfs_dir"
>>   			FALLBACK_DEVICE=1
>>   		fi
>>   
>> @@ -464,6 +470,7 @@ _run_test() {
>>   		local ret=0
>>   		for TEST_DEV in "${TEST_DEVS[@]}"; do
>>   			TEST_DEV_SYSFS="${TEST_DEV_SYSFS_DIRS["$TEST_DEV"]}"
>> +			TEST_DEV_PART_SYSFS="${TEST_DEV_PART_SYSFS_DIRS["$TEST_DEV"]}"
>>   			if (( !CAN_BE_ZONED )) && _test_dev_is_zoned; then
>>   				SKIP_REASON="${TEST_DEV} is a zoned block device"
>>   				_output_notrun "$TEST_NAME => $(basename "$TEST_DEV")"
>> @@ -483,6 +490,7 @@ _run_test() {
>>   		if (( FALLBACK_DEVICE )); then
>>   			cleanup_fallback_device
>>   			unset TEST_DEV_SYSFS_DIRS["${TEST_DEVS[0]}"]
>> +			unset TEST_DEV_PART_SYSFS_DIRS["${TEST_DEVS[0]}"]
>>   			TEST_DEVS=()
>>   		fi
>>   
>> @@ -507,6 +515,8 @@ _run_group() {
>>   		for i in "${!TEST_DEVS[@]}"; do
>>   			TEST_DEV="${TEST_DEVS[$i]}"
>>   			TEST_DEV_SYSFS="${TEST_DEV_SYSFS_DIRS["$TEST_DEV"]}"
>> +			# shellcheck disable=SC2034
>> +			TEST_DEV_PART_SYSFS="${TEST_DEV_PART_SYSFS_DIRS["$TEST_DEV"]}"
>>   			if ! group_device_requires; then
>>   				_output_notrun "${group}/*** => $(basename "$TEST_DEV")"
>>   				unset TEST_DEVS["$i"]
>> @@ -529,28 +539,31 @@ _run_group() {
>>   
>>   _find_sysfs_dir() {
>>   	local test_dev="$1"
>> +	local sysfs_path
>>   	local major=$((0x$(stat -L -c '%t' "$test_dev")))
>>   	local minor=$((0x$(stat -L -c '%T' "$test_dev")))
>> -	local dev="$major:$minor"
>> +	local sysdev_path="/sys/dev/block/${major}:${minor}"
>>   
>> -	local block_dir part_dir
>> -	for block_dir in /sys/block/*; do
>> -		if [[ $(cat "${block_dir}/dev") = "$dev" ]]; then
>> -			echo "$block_dir"
>> -			return
>> -		fi
>> -		for part_dir in "$block_dir"/*; do
>> -			if [[ -r ${part_dir}/dev && $(cat "${part_dir}/dev") = "$dev" ]]; then
>> -				echo "$block_dir"
>> -				return
>> -			fi
>> -		done
>> -	done
>> +	# Get the canonical sysfs path
>> +	if ! sysfs_path=/sys/dev/block/$(readlink "${sysdev_path}"); then
> 
> sysfs_path="$(realpath "/sys/dev/block/${major}:${minor}")" is a bit
> shorter, does that still work?

Yes, the suggested code works well. Will revise the patch.

I will resend this patch and the following two patches as a new series.
Thank you for the review!

-- 
Best Regards,
Shin'ichiro Kawasaki
