From: Tom Talpey <tom@talpey.com>
To: Namjae Jeon <linkinjeon@kernel.org>
Cc: linux-cifs@vger.kernel.org
Subject: Re: [PATCH 1/3] ksmbd: reduce smb direct max read/write size
Date: Sun, 30 Jan 2022 22:18:12 -0500
Message-ID: <b22e3dd4-51d4-31d2-ac69-7cb4510860fc@talpey.com>
In-Reply-To: <CAKYAXd_kaQBYyj68-ijxnxt1VsZMj09Qovss1vuzDGdF3CsP2A@mail.gmail.com>

On 1/30/2022 8:07 PM, Namjae Jeon wrote:
> 2022-01-31 4:04 GMT+09:00, Tom Talpey <tom@talpey.com>:
>> On 1/30/2022 4:34 AM, Namjae Jeon wrote:
>>> To support RDMA in Chelsio NICs, reduce the SMB Direct read/write
>>> size to about 512KB. With this change, we have verified that a
>>> single buffer descriptor is sent from the Windows client, and that
>>> the Intel X722 operates at that size.
>>
>> I am guessing that the larger payload required a fast-register of a page
>> count which was larger than the adapter supports? Can you provide more
>> detail?
> The Windows client can send multiple Buffer Descriptor V1 structure
> elements to the server. The ksmbd server doesn't support that yet, so
> it can handle only a single element.

Oh! So it's a bug in ksmbd, which doesn't support that part of the
protocol. Presumably this will be fixed in the future, and this patch
would be reverted.
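
For anyone following along: the element in question is the Buffer
Descriptor V1 defined in MS-SMBD. As a sketch from memory (the struct
and field names here are illustrative, not ksmbd's actual definition),
it looks something like:

    /* Buffer Descriptor V1: one entry per registered memory region.
     * A client may send an array of these in a single read/write
     * request; ksmbd currently consumes only the first. Types are
     * the usual kernel little-endian ones from <linux/types.h>.
     */
    struct smbd_buffer_descriptor_v1 {
            __le64 offset;  /* offset within the registered region */
            __le32 token;   /* steering tag (rkey) for RDMA access */
            __le32 length;  /* number of bytes described */
    } __packed;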

In any case, the smaller size is purely a workaround which permits
ksmbd to interoperate with the Windows client. It's not actually a
fix, and has nothing fundamentally to do with Chelsio or Intel NICs.

The patch description needs to say this. How about:

"ksmbd does not support more than one Buffer Descriptor V1 element in
an smbdirect protocol request. Reducing the maximum read/write size to
about 512KB allows interoperability with Windows over a wider variety
of RDMA NICs, as an interim workaround."
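
(Incidentally, both constants sit 64 bytes below a power-of-two
boundary, presumably to leave headroom for transport overhead; the
macro names below are purely illustrative:)

    #define SMBD_MAX_RW_1M   (1024 * 1024 - 64)  /* 1048512 */
    #define SMBD_MAX_RW_512K ( 512 * 1024 - 64)  /*  524224 */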

> We have found that Windows sends multiple elements depending on the
> smb direct max read/write size. For Mellanox adapters, 1MB seems to
> be the threshold; for Chelsio, 512KB. I thought that Windows would
> send a single buffer descriptor element when the read/write size was
> derived from the adapter's max_fast_reg_page_list_len value, but it
> did not.
> Chelsio's max_fast_reg_page_list_len: 128
> Mellanox's max_fast_reg_page_list_len: 511
> I don't know exactly what factor the Windows client uses to decide to
> send multiple elements; it is not described even in MS-SMB2. So I am
> trying to set the minimum read/write size until handling of multiple
> elements is supported.

The protocol documents are about the protocol, and they intentionally
avoid specifying the behavior of each implementation. You could ask
the dochelp folks, but you may not get a clear answer, because as
you can see, "it depends" :)

In practice, a client will probably try to pack as many pages into
a single registration (memory handle) as possible. This will depend
on the memory layout, the adapter capabilities, and the way the
client was actually coded (fast-register has very different requirements
from other memreg models). I take it the Linux smbdirect client does
not trigger this issue?
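
To put rough numbers on it: assuming 4KiB pages, a 512KB transfer fits
in 128 page entries, exactly Chelsio's reported
max_fast_reg_page_list_len, while 1MB needs 256. That lines up with
the Chelsio threshold you saw, though it doesn't explain Mellanox
splitting at 1MB when 511 pages would allow nearly 2MB. A
back-of-the-envelope check (hypothetical helper, kernel-style C):

    /* Pages one fast-register WR must cover for a single I/O.
     * DIV_ROUND_UP and PAGE_SIZE are the usual kernel macros.
     */
    static unsigned int smbd_pages_for_io(unsigned int io_size)
    {
            return DIV_ROUND_UP(io_size, PAGE_SIZE);
    }

    /* smbd_pages_for_io(524224)  -> 128, fits Chelsio's limit;
     * smbd_pages_for_io(1048512) -> 256, exceeds it.
     */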

Is there some reason you can't currently support multiple descriptors?
Or is it simply deferred for now?

Tom.

>> Also, what exactly does "single buffer descriptor from Windows client"
>> mean, and why is it relevant?
> Windows can send an array of one or more Buffer Descriptor V1
> structures, i.e. multiple elements. Currently, ksmbd can handle only
> one Buffer Descriptor V1 structure element.
> 
> If there's anything I've missed, please let me know.
>>
>> Confused,
>> Tom.
> Thanks!
>>
>>> Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
>>> ---
>>>    fs/ksmbd/transport_rdma.c | 2 +-
>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
>>> index 3c1ec1ac0b27..ba5a22bc2e6d 100644
>>> --- a/fs/ksmbd/transport_rdma.c
>>> +++ b/fs/ksmbd/transport_rdma.c
>>> @@ -80,7 +80,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
>>>    /*  The maximum single-message size which can be received */
>>>    static int smb_direct_max_receive_size = 8192;
>>>
>>> -static int smb_direct_max_read_write_size = 1048512;
>>> +static int smb_direct_max_read_write_size = 524224;
>>>
>>>    static int smb_direct_max_outstanding_rw_ops = 8;
>>>
>>
> 
