linux-cifs.vger.kernel.org archive mirror
* [PATCH 1/3] ksmbd: reduce smb direct max read/write size
@ 2022-01-30  9:34 Namjae Jeon
  2022-01-30  9:34 ` [PATCH 2/3] ksmbd: fix same UniqueId for dot and dotdot entries Namjae Jeon
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Namjae Jeon @ 2022-01-30  9:34 UTC (permalink / raw)
  To: linux-cifs; +Cc: Namjae Jeon

To support RDMA with Chelsio NICs, reduce the SMB Direct maximum
read/write size to about 512KB. With this change, we have verified
that the Windows client sends a single buffer descriptor, and that
the Intel X722 also operates at that size.
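
For reference, both the old and new limits sit 64 bytes below a
power-of-two boundary; the rationale in the sketch below is an
assumption, not something stated in the code:

    /* 1048512 = 1024 * 1024 - 64   (old limit, ~1MB)
     *  524224 =  512 * 1024 - 64   (new limit, ~512KB)
     * presumably the 64-byte slack keeps the payload just under the
     * 1MB/512KB boundary */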

Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
---
 fs/ksmbd/transport_rdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
index 3c1ec1ac0b27..ba5a22bc2e6d 100644
--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -80,7 +80,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
 /*  The maximum single-message size which can be received */
 static int smb_direct_max_receive_size = 8192;
 
-static int smb_direct_max_read_write_size = 1048512;
+static int smb_direct_max_read_write_size = 524224;
 
 static int smb_direct_max_outstanding_rw_ops = 8;
 
-- 
2.25.1



* [PATCH 2/3] ksmbd: fix same UniqueId for dot and dotdot entries
  2022-01-30  9:34 [PATCH 1/3] ksmbd: reduce smb direct max read/write size Namjae Jeon
@ 2022-01-30  9:34 ` Namjae Jeon
  2022-01-30  9:34 ` [PATCH 3/3] ksmbd: don't align last entry offset in smb2 query directory Namjae Jeon
  2022-01-30 19:04 ` [PATCH 1/3] ksmbd: reduce smb direct max read/write size Tom Talpey
  2 siblings, 0 replies; 7+ messages in thread
From: Namjae Jeon @ 2022-01-30  9:34 UTC (permalink / raw)
  To: linux-cifs; +Cc: Namjae Jeon

ksmbd uses the inode number as the UniqueId. However, the dot and
dotdot entries are currently both given the parent inode's number,
i.e. the same UniqueId. This patch fills them in from the current
inode and the parent inode, respectively.
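
Concretely, the attributes (and thus the UniqueId) for the two
synthetic entries now come from different dentries, using the same
names as the diff below:

    /* "."  -> dir->filp->f_path.dentry            (current directory)
     * ".." -> dir->filp->f_path.dentry->d_parent  (parent directory)
     * so the UniqueId, taken from the inode number, differs between
     * the two entries */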

Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
---
 fs/ksmbd/smb_common.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
index ef7f42b0290a..9a7e211dbf4f 100644
--- a/fs/ksmbd/smb_common.c
+++ b/fs/ksmbd/smb_common.c
@@ -308,14 +308,17 @@ int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level,
 	for (i = 0; i < 2; i++) {
 		struct kstat kstat;
 		struct ksmbd_kstat ksmbd_kstat;
+		struct dentry *dentry;
 
 		if (!dir->dot_dotdot[i]) { /* fill dot entry info */
 			if (i == 0) {
 				d_info->name = ".";
 				d_info->name_len = 1;
+				dentry = dir->filp->f_path.dentry;
 			} else {
 				d_info->name = "..";
 				d_info->name_len = 2;
+				dentry = dir->filp->f_path.dentry->d_parent;
 			}
 
 			if (!match_pattern(d_info->name, d_info->name_len,
@@ -327,7 +330,7 @@ int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level,
 			ksmbd_kstat.kstat = &kstat;
 			ksmbd_vfs_fill_dentry_attrs(work,
 						    user_ns,
-						    dir->filp->f_path.dentry->d_parent,
+						    dentry,
 						    &ksmbd_kstat);
 			rc = fn(conn, info_level, d_info, &ksmbd_kstat);
 			if (rc)
-- 
2.25.1



* [PATCH 3/3] ksmbd: don't align last entry offset in smb2 query directory
  2022-01-30  9:34 [PATCH 1/3] ksmbd: reduce smb direct max read/write size Namjae Jeon
  2022-01-30  9:34 ` [PATCH 2/3] ksmbd: fix same UniqueId for dot and dotdot entries Namjae Jeon
@ 2022-01-30  9:34 ` Namjae Jeon
  2022-01-30 19:04 ` [PATCH 1/3] ksmbd: reduce smb direct max read/write size Tom Talpey
  2 siblings, 0 replies; 7+ messages in thread
From: Namjae Jeon @ 2022-01-30  9:34 UTC (permalink / raw)
  To: linux-cifs; +Cc: Namjae Jeon

When comparing smb2 query directory packets from other servers with
ksmbd's, OutputBufferLength differs: other servers add the unaligned
next entry offset to OutputBufferLength for the last entry, while
ksmbd adds the aligned one.
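
As a worked example, assuming the usual 8-byte
KSMBD_DIR_INFO_ALIGNMENT and a last entry whose unaligned size is 94
bytes (the sizes here are illustrative, not from a trace):

    struct_sz            = 94;           /* unaligned size of last entry */
    next_entry_offset    = ALIGN(94, 8); /* = 96                         */
    last_entry_off_align = 96 - 94;      /* = 2 bytes of padding         */
    /* subtracting those 2 bytes from data_count makes
     * OutputBufferLength end at the unaligned end of the last
     * entry, as other servers do */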

Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
---
 fs/ksmbd/smb2pdu.c | 7 ++++---
 fs/ksmbd/vfs.h     | 1 +
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index 6806994383d9..67e8e28e3fc3 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -3422,9 +3422,9 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
 		goto free_conv_name;
 	}
 
-	struct_sz = readdir_info_level_struct_sz(info_level);
-	next_entry_offset = ALIGN(struct_sz - 1 + conv_len,
-				  KSMBD_DIR_INFO_ALIGNMENT);
+	struct_sz = readdir_info_level_struct_sz(info_level) - 1 + conv_len;
+	next_entry_offset = ALIGN(struct_sz, KSMBD_DIR_INFO_ALIGNMENT);
+	d_info->last_entry_off_align = next_entry_offset - struct_sz;
 
 	if (next_entry_offset > d_info->out_buf_len) {
 		d_info->out_buf_len = 0;
@@ -3976,6 +3976,7 @@ int smb2_query_dir(struct ksmbd_work *work)
 		((struct file_directory_info *)
 		((char *)rsp->Buffer + d_info.last_entry_offset))
 		->NextEntryOffset = 0;
+		d_info.data_count -= d_info.last_entry_off_align;
 
 		rsp->StructureSize = cpu_to_le16(9);
 		rsp->OutputBufferOffset = cpu_to_le16(72);
diff --git a/fs/ksmbd/vfs.h b/fs/ksmbd/vfs.h
index adf94a4f22fa..8c37aaf936ab 100644
--- a/fs/ksmbd/vfs.h
+++ b/fs/ksmbd/vfs.h
@@ -47,6 +47,7 @@ struct ksmbd_dir_info {
 	int		last_entry_offset;
 	bool		hide_dot_file;
 	int		flags;
+	int		last_entry_off_align;
 };
 
 struct ksmbd_readdir_data {
-- 
2.25.1



* Re: [PATCH 1/3] ksmbd: reduce smb direct max read/write size
  2022-01-30  9:34 [PATCH 1/3] ksmbd: reduce smb direct max read/write size Namjae Jeon
  2022-01-30  9:34 ` [PATCH 2/3] ksmbd: fix same UniqueId for dot and dotdot entries Namjae Jeon
  2022-01-30  9:34 ` [PATCH 3/3] ksmbd: don't align last entry offset in smb2 query directory Namjae Jeon
@ 2022-01-30 19:04 ` Tom Talpey
  2022-01-31  1:07   ` Namjae Jeon
  2 siblings, 1 reply; 7+ messages in thread
From: Tom Talpey @ 2022-01-30 19:04 UTC (permalink / raw)
  To: Namjae Jeon, linux-cifs

On 1/30/2022 4:34 AM, Namjae Jeon wrote:
> To support RDMA with Chelsio NICs, reduce the SMB Direct maximum
> read/write size to about 512KB. With this change, we have verified
> that the Windows client sends a single buffer descriptor, and that
> the Intel X722 also operates at that size.

I am guessing that the larger payload required a fast-register of a page 
count which was larger than the adapter supports? Can you provide more
detail?

Also, what exactly does "single buffer descriptor from Windows client"
mean, and why is it relevant?

Confused,
Tom.

> Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
> ---
>   fs/ksmbd/transport_rdma.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
> index 3c1ec1ac0b27..ba5a22bc2e6d 100644
> --- a/fs/ksmbd/transport_rdma.c
> +++ b/fs/ksmbd/transport_rdma.c
> @@ -80,7 +80,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
>   /*  The maximum single-message size which can be received */
>   static int smb_direct_max_receive_size = 8192;
>   
> -static int smb_direct_max_read_write_size = 1048512;
> +static int smb_direct_max_read_write_size = 524224;
>   
>   static int smb_direct_max_outstanding_rw_ops = 8;
>   


* Re: [PATCH 1/3] ksmbd: reduce smb direct max read/write size
  2022-01-30 19:04 ` [PATCH 1/3] ksmbd: reduce smb direct max read/write size Tom Talpey
@ 2022-01-31  1:07   ` Namjae Jeon
  2022-01-31  3:18     ` Tom Talpey
  0 siblings, 1 reply; 7+ messages in thread
From: Namjae Jeon @ 2022-01-31  1:07 UTC (permalink / raw)
  To: Tom Talpey; +Cc: linux-cifs

2022-01-31 4:04 GMT+09:00, Tom Talpey <tom@talpey.com>:
> On 1/30/2022 4:34 AM, Namjae Jeon wrote:
>> To support RDMA with Chelsio NICs, reduce the SMB Direct maximum
>> read/write size to about 512KB. With this change, we have verified
>> that the Windows client sends a single buffer descriptor, and that
>> the Intel X722 also operates at that size.
>
> I am guessing that the larger payload required a fast-register of a page
> count which was larger than the adapter supports? Can you provide more
> detail?
The Windows client can send multiple Buffer Descriptor V1 structure
elements to the server. The ksmbd server doesn't support that yet, so
it can only handle a single element.
We have observed that whether Windows sends multiple elements depends
on the smb direct max read/write size: for Mellanox adapters the
threshold seems to be 1MB, and for Chelsio, 512KB. I thought Windows
would send a single buffer descriptor element if I set the read/write
size from the adapter's max_fast_reg_page_list_len value, but it did
not.
chelsio's max_fast_reg_page_list_len: 128
mellanox's max_fast_reg_page_list_len: 511
I don't know exactly what factor the Windows client uses to decide to
send multiple elements; even MS-SMB2 does not describe it. So I am
setting the smaller read/write size until handling of multiple
elements is supported.
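
A naive back-of-the-envelope check (my own estimate, assuming 4KB
pages and one fast-register region per max_fast_reg_page_list_len
pages) shows the page-list length alone cannot be that factor:

    /* chelsio : 128 * 4096  = 524288  (512KB) -> matches the 512KB
     *                                            threshold we see
     * mellanox: 511 * 4096 ~= 2093056 (~2MB)  -> does not explain
     *                                            the 1MB threshold */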
>
> Also, what exactly does "single buffer descriptor from Windows client"
> mean, and why is it relevant?
Windows can send an array of one or more Buffer Descriptor V1
structures, i.e. multiple elements. Currently, ksmbd can handle only
one Buffer Descriptor V1 structure element.
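
For reference, each element of that array is a 16-byte descriptor;
the sketch below follows the Linux cifs client's definition
(struct smbd_buffer_descriptor_v1), so treat the naming as borrowed
from there rather than ksmbd's own:

    struct smbd_buffer_descriptor_v1 {
            __le64 offset;  /* remote offset of this segment     */
            __le32 token;   /* steering tag of the registered MR */
            __le32 length;  /* length of this segment            */
    } __packed;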

If there's anything I've missed, please let me know.
>
> Confused,
> Tom.
Thanks!
>
>> Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
>> ---
>>   fs/ksmbd/transport_rdma.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
>> index 3c1ec1ac0b27..ba5a22bc2e6d 100644
>> --- a/fs/ksmbd/transport_rdma.c
>> +++ b/fs/ksmbd/transport_rdma.c
>> @@ -80,7 +80,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
>>   /*  The maximum single-message size which can be received */
>>   static int smb_direct_max_receive_size = 8192;
>>
>> -static int smb_direct_max_read_write_size = 1048512;
>> +static int smb_direct_max_read_write_size = 524224;
>>
>>   static int smb_direct_max_outstanding_rw_ops = 8;
>>
>


* Re: [PATCH 1/3] ksmbd: reduce smb direct max read/write size
  2022-01-31  1:07   ` Namjae Jeon
@ 2022-01-31  3:18     ` Tom Talpey
  2022-01-31  5:09       ` Namjae Jeon
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Talpey @ 2022-01-31  3:18 UTC (permalink / raw)
  To: Namjae Jeon; +Cc: linux-cifs

On 1/30/2022 8:07 PM, Namjae Jeon wrote:
> 2022-01-31 4:04 GMT+09:00, Tom Talpey <tom@talpey.com>:
>> On 1/30/2022 4:34 AM, Namjae Jeon wrote:
>>> To support RDMA with Chelsio NICs, reduce the SMB Direct maximum
>>> read/write size to about 512KB. With this change, we have verified
>>> that the Windows client sends a single buffer descriptor, and that
>>> the Intel X722 also operates at that size.
>>
>> I am guessing that the larger payload required a fast-register of a page
>> count which was larger than the adapter supports? Can you provide more
>> detail?
> The Windows client can send multiple Buffer Descriptor V1 structure
> elements to the server. The ksmbd server doesn't support that yet, so
> it can only handle a single element.

Oh! So it's a bug in ksmbd, which isn't supporting that part of the
protocol. Presumably this will be fixed in the future, and this patch
would be reverted.

In any case, the smaller size is purely a workaround which permits
it to interoperate with the Windows client. It's not actually a fix,
and has nothing fundamentally to do with Chelsio or Intel NICs.

The patch description needs to say this. How about:

"ksmbd does not support more than one Buffer Descriptor V1 element in
an smbdirect protocol request. Reducing the maximum read/write size to
about 512KB allows interoperability with Windows over a wider variety
of RDMA NICs, as an interim workaround."

> We have observed that whether Windows sends multiple elements depends
> on the smb direct max read/write size: for Mellanox adapters the
> threshold seems to be 1MB, and for Chelsio, 512KB. I thought Windows
> would send a single buffer descriptor element if I set the read/write
> size from the adapter's max_fast_reg_page_list_len value, but it did
> not.
> chelsio's max_fast_reg_page_list_len: 128
> mellanox's max_fast_reg_page_list_len: 511
> I don't know exactly what factor the Windows client uses to decide to
> send multiple elements; even MS-SMB2 does not describe it. So I am
> setting the smaller read/write size until handling of multiple
> elements is supported.

The protocol documents are about the protocol, and they intentionally
avoid specifying the behavior of each implementation. You could ask
the dochelp folks, but you may not get a clear answer, because as
you can see, "it depends" :)

In practice, a client will probably try to pack as many pages into
a single registration (memory handle) as possible. This will depend
on the memory layout, the adapter capabilities, and the way the
client was actually coded (fast-register has very different requirements
from other memreg models). I take it the Linux smbdirect client does
not trigger this issue?
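
(On the Linux side, for example, the per-registration page budget is
fixed when the fast-register MR is allocated; a rough sketch, where
pd, device and desired_pages stand in for the caller's context:

    struct ib_mr *mr =
            ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG,
                        min_t(u32, desired_pages,
                              device->attrs.max_fast_reg_page_list_len));

so anything larger than that many pages has to be split across
registrations.)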

Is there some reason you can't currently support multiple descriptors?
Or is it simply deferred for now?

Tom.

>> Also, what exactly does "single buffer descriptor from Windows client"
>> mean, and why is it relevant?
> Windows can send an array of one or more Buffer Descriptor V1
> structures, i.e. multiple elements. Currently, ksmbd can handle only
> one Buffer Descriptor V1 structure element.
> 
> If there's anything I've missed, please let me know.
>>
>> Confused,
>> Tom.
> Thanks!
>>
>>> Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
>>> ---
>>>    fs/ksmbd/transport_rdma.c | 2 +-
>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
>>> index 3c1ec1ac0b27..ba5a22bc2e6d 100644
>>> --- a/fs/ksmbd/transport_rdma.c
>>> +++ b/fs/ksmbd/transport_rdma.c
>>> @@ -80,7 +80,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 *
>>> 1024;
>>>    /*  The maximum single-message size which can be received */
>>>    static int smb_direct_max_receive_size = 8192;
>>>
>>> -static int smb_direct_max_read_write_size = 1048512;
>>> +static int smb_direct_max_read_write_size = 524224;
>>>
>>>    static int smb_direct_max_outstanding_rw_ops = 8;
>>>
>>
> 


* Re: [PATCH 1/3] ksmbd: reduce smb direct max read/write size
  2022-01-31  3:18     ` Tom Talpey
@ 2022-01-31  5:09       ` Namjae Jeon
  0 siblings, 0 replies; 7+ messages in thread
From: Namjae Jeon @ 2022-01-31  5:09 UTC (permalink / raw)
  To: Tom Talpey; +Cc: linux-cifs

2022-01-31 12:18 GMT+09:00, Tom Talpey <tom@talpey.com>:
> On 1/30/2022 8:07 PM, Namjae Jeon wrote:
>> 2022-01-31 4:04 GMT+09:00, Tom Talpey <tom@talpey.com>:
>>> On 1/30/2022 4:34 AM, Namjae Jeon wrote:
>>>> To support RDMA with Chelsio NICs, reduce the SMB Direct maximum
>>>> read/write size to about 512KB. With this change, we have verified
>>>> that the Windows client sends a single buffer descriptor, and that
>>>> the Intel X722 also operates at that size.
>>>
>>> I am guessing that the larger payload required a fast-register of a page
>>> count which was larger than the adapter supports? Can you provide more
>>> detail?
>> The Windows client can send multiple Buffer Descriptor V1 structure
>> elements to the server. The ksmbd server doesn't support that yet,
>> so it can only handle a single element.
>
> Oh! So it's a bug in ksmbd which isn't supporting the protocol.
> Presumably this will be fixed in the future, and this patch
> would be reversed.
Right. Work to support it is in progress; we need time to complete it.
>
> In any case, the smaller size is purely a workaround which permits
> it to interoperate with the Windows client. It's not actually a fix,
> and has nothing fundamentally to do with Chelsio or Intel NICs.
Right.
>
> The patch needs to say these. How about
>
> "ksmbd does not support more than one Buffer Descriptor V1 element in
> an smbdirect protocol request. Reducing the maximum read/write size to
> about 512KB allows interoperability with Windows over a wider variety
> of RDMA NICs, as an interim workaround."
Thanks :) I will update the patch description.
>
>> We have observed that whether Windows sends multiple elements depends
>> on the smb direct max read/write size: for Mellanox adapters the
>> threshold seems to be 1MB, and for Chelsio, 512KB. I thought Windows
>> would send a single buffer descriptor element if I set the read/write
>> size from the adapter's max_fast_reg_page_list_len value, but it did
>> not.
>> chelsio's max_fast_reg_page_list_len: 128
>> mellanox's max_fast_reg_page_list_len: 511
>> I don't know exactly what factor the Windows client uses to decide to
>> send multiple elements; even MS-SMB2 does not describe it. So I am
>> setting the smaller read/write size until handling of multiple
>> elements is supported.
>
> The protocol documents are about the protocol, and they intentionally
> avoid specifying the behavior of each implementation. You could ask
> the dochelp folks, but you may not get a clear answer, because as
> you can see, "it depends" :)
Okay.
>
> In practice, a client will probably try to pack as many pages into
> a single registration (memory handle) as possible. This will depend
> on the memory layout, the adapter capabilities, and the way the
> client was actually coded (fast-register has very different requirements
> from other memreg models). I take it the Linux smbdirect client does
> not trigger this issue?
Yes, because the Linux smbdirect client sends only one Buffer
Descriptor V1 element :) So there is no issue between the Linux
smbdirect client and ksmbd.
>
> Is there some reason you can't currently support multiple descriptors?
> Or is it simply deferred for now?
ksmbd's smbdirect code was adapted from the Linux cifs client's, and
we only later found that the Windows client could send more than one
element. Support for that is in progress; in the meantime, I want
ksmbd to work over RDMA with Windows on a wider variety of RDMA NICs.

Thanks!
>
> Tom.
>
>>> Also, what exactly does "single buffer descriptor from Windows client"
>>> mean, and why is it relevant?
>> Windows can send an array of one or more Buffer Descriptor V1
>> structures, i.e. multiple elements. Currently, ksmbd can handle only
>> one Buffer Descriptor V1 structure element.
>>
>> If there's anything I've missed, please let me know.
>>>
>>> Confused,
>>> Tom.
>> Thanks!
>>>
>>>> Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
>>>> ---
>>>>    fs/ksmbd/transport_rdma.c | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
>>>> index 3c1ec1ac0b27..ba5a22bc2e6d 100644
>>>> --- a/fs/ksmbd/transport_rdma.c
>>>> +++ b/fs/ksmbd/transport_rdma.c
>>>> @@ -80,7 +80,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
>>>>    /*  The maximum single-message size which can be received */
>>>>    static int smb_direct_max_receive_size = 8192;
>>>>
>>>> -static int smb_direct_max_read_write_size = 1048512;
>>>> +static int smb_direct_max_read_write_size = 524224;
>>>>
>>>>    static int smb_direct_max_outstanding_rw_ops = 8;
>>>>
>>>
>>
>
