linux-block.vger.kernel.org archive mirror
* [PATCH 1/1] null_blk: add option for managing virtual boundary
@ 2021-04-11 16:26 Max Gurtovoy
  2021-04-11 22:13 ` Chaitanya Kulkarni
  2021-04-11 23:34 ` Damien Le Moal
  0 siblings, 2 replies; 5+ messages in thread
From: Max Gurtovoy @ 2021-04-11 16:26 UTC (permalink / raw)
  To: linux-block, martin.petersen, axboe; +Cc: oren, idanb, yossike, Max Gurtovoy

This enables changing the virtual boundary of null_blk devices. Until
now, null_blk devices did not impose any restriction on the
scatter/gather elements received from the block layer. Add a module
parameter that controls the virtual boundary. This makes it possible to
test the efficiency of the block layer bounce buffer when a suitable
application sends discontiguous I/O to the given device.

Initial testing with patched FIO showed the following results (64 jobs,
128 iodepth):
IO size     READ (virt=false)   READ (virt=true)    WRITE (virt=false)  WRITE (virt=true)
----------  ------------------  ------------------  ------------------  ------------------
 1k            10.7M                8482k               10.8M              8471k
 2k            10.4M                8266k               10.4M              8271k
 4k            10.4M                8274k               10.3M              8226k
 8k            10.2M                8131k               9800k              7933k
 16k           9567k                7764k               8081k              6828k
 32k           8865k                7309k               5570k              5153k
 64k           7695k                6586k               2682k              2617k
 128k          5346k                5489k               1320k              1296k

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/block/null_blk/main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d6c821d48090..9ca80e38f7e5 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -84,6 +84,10 @@ enum {
 	NULL_Q_MQ		= 2,
 };
 
+static bool g_virt_boundary = false;
+module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
+MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
+
 static int g_no_sched;
 module_param_named(no_sched, g_no_sched, int, 0444);
 MODULE_PARM_DESC(no_sched, "No io scheduler");
@@ -1880,6 +1884,9 @@ static int null_add_dev(struct nullb_device *dev)
 				 BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
 
+	if (g_virt_boundary)
+		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
+
 	null_config_discard(nullb);
 
 	sprintf(nullb->disk_name, "nullb%d", nullb->index);
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/1] null_blk: add option for managing virtual boundary
  2021-04-11 16:26 [PATCH 1/1] null_blk: add option for managing virtual boundary Max Gurtovoy
@ 2021-04-11 22:13 ` Chaitanya Kulkarni
  2021-04-11 23:34 ` Damien Le Moal
  1 sibling, 0 replies; 5+ messages in thread
From: Chaitanya Kulkarni @ 2021-04-11 22:13 UTC (permalink / raw)
  To: Max Gurtovoy, linux-block, martin.petersen, axboe; +Cc: oren, idanb, yossike

On 4/11/21 09:30, Max Gurtovoy wrote:
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index d6c821d48090..9ca80e38f7e5 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -84,6 +84,10 @@ enum {
>  	NULL_Q_MQ		= 2,
>  };
>  
> +static bool g_virt_boundary = false;
> +module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
> +MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
> +

The patch looks useful to me. But this will unconditionally enable
the virt boundary for all the devices created from configfs, based on
the module parameter. Such enforcement creates an inflexibility when
the user wants to create different devices for testing with different
configurations (e.g. virt boundary on/off) from a single modprobe.



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/1] null_blk: add option for managing virtual boundary
  2021-04-11 16:26 [PATCH 1/1] null_blk: add option for managing virtual boundary Max Gurtovoy
  2021-04-11 22:13 ` Chaitanya Kulkarni
@ 2021-04-11 23:34 ` Damien Le Moal
  2021-04-11 23:36   ` Damien Le Moal
  1 sibling, 1 reply; 5+ messages in thread
From: Damien Le Moal @ 2021-04-11 23:34 UTC (permalink / raw)
  To: Max Gurtovoy, linux-block, martin.petersen, axboe; +Cc: oren, idanb, yossike

On 2021/04/12 1:30, Max Gurtovoy wrote:
> This enables changing the virtual boundary of null_blk devices. Until
> now, null_blk devices did not impose any restriction on the
> scatter/gather elements received from the block layer. Add a module
> parameter that controls the virtual boundary. This makes it possible to
> test the efficiency of the block layer bounce buffer when a suitable
> application sends discontiguous I/O to the given device.
> 
> Initial testing with patched FIO showed the following results (64 jobs,
> 128 iodepth):
> IO size      READ (virt=false)   READ (virt=true)   Write (virt=false)  Write (virt=true)
> ----------  ------------------- -----------------  ------------------- -------------------
>  1k            10.7M                8482k               10.8M              8471k
>  2k            10.4M                8266k               10.4M              8271k
>  4k            10.4M                8274k               10.3M              8226k
>  8k            10.2M                8131k               9800k              7933k
>  16k           9567k                7764k               8081k              6828k
>  32k           8865k                7309k               5570k              5153k
>  64k           7695k                6586k               2682k              2617k
>  128k          5346k                5489k               1320k              1296k
> 
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>  drivers/block/null_blk/main.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index d6c821d48090..9ca80e38f7e5 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -84,6 +84,10 @@ enum {
>  	NULL_Q_MQ		= 2,
>  };
>  
> +static bool g_virt_boundary = false;
> +module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
> +MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
> +
>  static int g_no_sched;
>  module_param_named(no_sched, g_no_sched, int, 0444);
>  MODULE_PARM_DESC(no_sched, "No io scheduler");
> @@ -1880,6 +1884,9 @@ static int null_add_dev(struct nullb_device *dev)
>  				 BLK_DEF_MAX_SECTORS);
>  	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
>  
> +	if (g_virt_boundary)
> +		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
> +
>  	null_config_discard(nullb);
>  
>  	sprintf(nullb->disk_name, "nullb%d", nullb->index);
> 

Looks good to me, but could you also add the configfs equivalent setting ?


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/1] null_blk: add option for managing virtual boundary
  2021-04-11 23:34 ` Damien Le Moal
@ 2021-04-11 23:36   ` Damien Le Moal
  2021-04-12  9:39     ` Max Gurtovoy
  0 siblings, 1 reply; 5+ messages in thread
From: Damien Le Moal @ 2021-04-11 23:36 UTC (permalink / raw)
  To: Max Gurtovoy, linux-block, martin.petersen, axboe; +Cc: oren, idanb, yossike

On 2021/04/12 8:34, Damien Le Moal wrote:
> On 2021/04/12 1:30, Max Gurtovoy wrote:
>> This enables changing the virtual boundary of null_blk devices. Until
>> now, null_blk devices did not impose any restriction on the
>> scatter/gather elements received from the block layer. Add a module
>> parameter that controls the virtual boundary. This makes it possible to
>> test the efficiency of the block layer bounce buffer when a suitable
>> application sends discontiguous I/O to the given device.
>>
>> Initial testing with patched FIO showed the following results (64 jobs,
>> 128 iodepth):
>> IO size      READ (virt=false)   READ (virt=true)   Write (virt=false)  Write (virt=true)
>> ----------  ------------------- -----------------  ------------------- -------------------
>>  1k            10.7M                8482k               10.8M              8471k
>>  2k            10.4M                8266k               10.4M              8271k
>>  4k            10.4M                8274k               10.3M              8226k
>>  8k            10.2M                8131k               9800k              7933k
>>  16k           9567k                7764k               8081k              6828k
>>  32k           8865k                7309k               5570k              5153k
>>  64k           7695k                6586k               2682k              2617k
>>  128k          5346k                5489k               1320k              1296k
>>
>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>> ---
>>  drivers/block/null_blk/main.c | 7 +++++++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
>> index d6c821d48090..9ca80e38f7e5 100644
>> --- a/drivers/block/null_blk/main.c
>> +++ b/drivers/block/null_blk/main.c
>> @@ -84,6 +84,10 @@ enum {
>>  	NULL_Q_MQ		= 2,
>>  };
>>  
>> +static bool g_virt_boundary = false;
>> +module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
>> +MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
>> +
>>  static int g_no_sched;
>>  module_param_named(no_sched, g_no_sched, int, 0444);
>>  MODULE_PARM_DESC(no_sched, "No io scheduler");
>> @@ -1880,6 +1884,9 @@ static int null_add_dev(struct nullb_device *dev)
>>  				 BLK_DEF_MAX_SECTORS);
>>  	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
>>  
>> +	if (g_virt_boundary)
>> +		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
>> +
>>  	null_config_discard(nullb);
>>  
>>  	sprintf(nullb->disk_name, "nullb%d", nullb->index);
>>
> 
> Looks good to me, but could you also add the configfs equivalent setting ?

Oops. Chaitanya had already pointed this out... Sorry about the noise :)


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/1] null_blk: add option for managing virtual boundary
  2021-04-11 23:36   ` Damien Le Moal
@ 2021-04-12  9:39     ` Max Gurtovoy
  0 siblings, 0 replies; 5+ messages in thread
From: Max Gurtovoy @ 2021-04-12  9:39 UTC (permalink / raw)
  To: Damien Le Moal, linux-block, martin.petersen, axboe; +Cc: oren, idanb, yossike


On 4/12/2021 2:36 AM, Damien Le Moal wrote:
> On 2021/04/12 8:34, Damien Le Moal wrote:
>> On 2021/04/12 1:30, Max Gurtovoy wrote:
>>> This enables changing the virtual boundary of null_blk devices. Until
>>> now, null_blk devices did not impose any restriction on the
>>> scatter/gather elements received from the block layer. Add a module
>>> parameter that controls the virtual boundary. This makes it possible to
>>> test the efficiency of the block layer bounce buffer when a suitable
>>> application sends discontiguous I/O to the given device.
>>>
>>> Initial testing with patched FIO showed the following results (64 jobs,
>>> 128 iodepth):
>>> IO size      READ (virt=false)   READ (virt=true)   Write (virt=false)  Write (virt=true)
>>> ----------  ------------------- -----------------  ------------------- -------------------
>>>   1k            10.7M                8482k               10.8M              8471k
>>>   2k            10.4M                8266k               10.4M              8271k
>>>   4k            10.4M                8274k               10.3M              8226k
>>>   8k            10.2M                8131k               9800k              7933k
>>>   16k           9567k                7764k               8081k              6828k
>>>   32k           8865k                7309k               5570k              5153k
>>>   64k           7695k                6586k               2682k              2617k
>>>   128k          5346k                5489k               1320k              1296k
>>>
>>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>>> ---
>>>   drivers/block/null_blk/main.c | 7 +++++++
>>>   1 file changed, 7 insertions(+)
>>>
>>> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
>>> index d6c821d48090..9ca80e38f7e5 100644
>>> --- a/drivers/block/null_blk/main.c
>>> +++ b/drivers/block/null_blk/main.c
>>> @@ -84,6 +84,10 @@ enum {
>>>   	NULL_Q_MQ		= 2,
>>>   };
>>>   
>>> +static bool g_virt_boundary = false;
>>> +module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
>>> +MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
>>> +
>>>   static int g_no_sched;
>>>   module_param_named(no_sched, g_no_sched, int, 0444);
>>>   MODULE_PARM_DESC(no_sched, "No io scheduler");
>>> @@ -1880,6 +1884,9 @@ static int null_add_dev(struct nullb_device *dev)
>>>   				 BLK_DEF_MAX_SECTORS);
>>>   	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
>>>   
>>> +	if (g_virt_boundary)
>>> +		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
>>> +
>>>   	null_config_discard(nullb);
>>>   
>>>   	sprintf(nullb->disk_name, "nullb%d", nullb->index);
>>>
>> Looks good to me, but could you also add the configfs equivalent setting ?
> Oops. Chaitanya already had pointed this out... Sorry about the noise :)

Sure, I'll add it in v2.

Thanks.


>
>

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2021-04-12  9:46 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-11 16:26 [PATCH 1/1] null_blk: add option for managing virtual boundary Max Gurtovoy
2021-04-11 22:13 ` Chaitanya Kulkarni
2021-04-11 23:34 ` Damien Le Moal
2021-04-11 23:36   ` Damien Le Moal
2021-04-12  9:39     ` Max Gurtovoy
