linux-cifs.vger.kernel.org archive mirror
* [PATCH] cifs: use the least loaded channel for sending requests
@ 2022-12-20 13:28 Shyam Prasad N
  2022-12-20 14:47 ` Steve French
  0 siblings, 1 reply; 7+ messages in thread
From: Shyam Prasad N @ 2022-12-20 13:28 UTC (permalink / raw)
  To: Steve French, CIFS, Tom Talpey, Bharath S M

Hi Steve,

Below is a patch for a new channel allocation strategy that we've been
discussing for some time now. It uses the least loaded channel to send
requests as compared to the simple round robin. This will help
especially in cases where the server is not consuming requests at the
same rate across the channels.

I've put the changes behind a config option that has a default value of true.
This way, we have an option to switch to the current default of round
robin when needed.

Please review.

https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
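[Editor's note: in outline, the least-loaded strategy discussed here amounts to something like the following. This is a simplified, hypothetical sketch; the struct and function names are illustrative and do not match the linked patch.]

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Illustrative stand-in for a per-channel server structure. */
struct channel {
	unsigned int in_flight;	/* requests sent but not yet answered */
};

/*
 * Pick the channel with the fewest in-flight requests, instead of
 * rotating round robin. Returns the index of the least loaded channel.
 */
size_t pick_least_loaded(const struct channel *ch, size_t nchans)
{
	unsigned int min_in_flight = UINT_MAX;
	size_t best = 0;

	for (size_t i = 0; i < nchans; i++) {
		if (ch[i].in_flight < min_in_flight) {
			min_in_flight = ch[i].in_flight;
			best = i;
		}
	}
	return best;
}
```

Unlike round robin, this naturally routes around a channel whose server end has stopped consuming requests, since its in_flight count stays high.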

-- 
Regards,
Shyam

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH] cifs: use the least loaded channel for sending requests
  2022-12-20 13:28 [PATCH] cifs: use the least loaded channel for sending requests Shyam Prasad N
@ 2022-12-20 14:47 ` Steve French
  2022-12-20 18:18   ` Tom Talpey
  0 siblings, 1 reply; 7+ messages in thread
From: Steve French @ 2022-12-20 14:47 UTC (permalink / raw)
  To: Shyam Prasad N; +Cc: CIFS, Tom Talpey, Bharath S M

maybe a module load parm would be easier to use than kernel config
option (and give more realistic test comparison data for many)

On Tue, Dec 20, 2022 at 7:29 AM Shyam Prasad N <nspmangalore@gmail.com> wrote:
>
> Hi Steve,
>
> Below is a patch for a new channel allocation strategy that we've been
> discussing for some time now. It uses the least loaded channel to send
> requests as compared to the simple round robin. This will help
> especially in cases where the server is not consuming requests at the
> same rate across the channels.
>
> I've put the changes behind a config option that has a default value of true.
> This way, we have an option to switch to the current default of round
> robin when needed.
>
> Please review.
>
> https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
>
> --
> Regards,
> Shyam



-- 
Thanks,

Steve


* Re: [PATCH] cifs: use the least loaded channel for sending requests
  2022-12-20 14:47 ` Steve French
@ 2022-12-20 18:18   ` Tom Talpey
  2022-12-21 15:33     ` Shyam Prasad N
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Talpey @ 2022-12-20 18:18 UTC (permalink / raw)
  To: Steve French, Shyam Prasad N; +Cc: CIFS, Bharath S M

I'd suggest a runtime configuration, personally. A config option
is undesirable, because it's very difficult to deploy. A module
parameter is only slightly better. The channel selection is a
natural for a runtime per-operation decision. And for the record,
I think a round-robin selection is a bad approach. Personally
I'd just yank it.

I'm uneasy about ignoring the channel bandwidth and channel type.
Low bandwidth channels, or mixing RDMA and non-RDMA, are going to
be very problematic for bulk data. In fact, the Windows client
never mixes such alternatives, it always selects identical link
speeds and transport types. The traffic will always find a way to
block on the slowest/worst connection.

Do you feel there is some advantage to mixing traffic? If so, can
you elaborate on that?

The patch you link to doesn't seem complete. If min_in_flight is
initialized to -1, how does the server->in_flight < min_in_flight
test ever return true?

Tom.

On 12/20/2022 9:47 AM, Steve French wrote:
> maybe a module load parm would be easier to use than kernel config
> option (and give more realistic test comparison data for many)
> 
> On Tue, Dec 20, 2022 at 7:29 AM Shyam Prasad N <nspmangalore@gmail.com> wrote:
>>
>> Hi Steve,
>>
>> Below is a patch for a new channel allocation strategy that we've been
>> discussing for some time now. It uses the least loaded channel to send
>> requests as compared to the simple round robin. This will help
>> especially in cases where the server is not consuming requests at the
>> same rate across the channels.
>>
>> I've put the changes behind a config option that has a default value of true.
>> This way, we have an option to switch to the current default of round
>> robin when needed.
>>
>> Please review.
>>
>> https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
>>
>> --
>> Regards,
>> Shyam
> 
> 
> 


* Re: [PATCH] cifs: use the least loaded channel for sending requests
  2022-12-20 18:18   ` Tom Talpey
@ 2022-12-21 15:33     ` Shyam Prasad N
  2022-12-21 16:46       ` Tom Talpey
  0 siblings, 1 reply; 7+ messages in thread
From: Shyam Prasad N @ 2022-12-21 15:33 UTC (permalink / raw)
  To: Tom Talpey; +Cc: Steve French, CIFS, Bharath S M, Aurélien Aptel

On Tue, Dec 20, 2022 at 11:48 PM Tom Talpey <tom@talpey.com> wrote:
>
> I'd suggest a runtime configuration, personally. A config option
> is undesirable, because it's very difficult to deploy. A module
> parameter is only slightly better. The channel selection is a
> natural for a runtime per-operation decision. And for the record,
> I think a round-robin selection is a bad approach. Personally
> I'd just yank it.

Hi Tom,

Thanks for taking a look at this.
I was considering doing so, but was unsure whether we'll still need a
way to do round robin.
Steve/Aurelien: any objections to just removing the round-robin approach?

>
> I'm uneasy about ignoring the channel bandwidth and channel type.
> Low bandwidth channels, or mixing RDMA and non-RDMA, are going to
> be very problematic for bulk data. In fact, the Windows client
> never mixes such alternatives, it always selects identical link
> speeds and transport types. The traffic will always find a way to
> block on the slowest/worst connection.
>
> Do you feel there is some advantage to mixing traffic? If so, can
> you elaborate on that?

We will not be mixing traffic here. Channel bandwidth and type are
considered while establishing a new channel.
This change is only for distributing the requests among the channels
for the session.

That said, those decisions are sub-optimal today, IMO.
I plan to send out some changes there too.

>
> The patch you link to doesn't seem complete. If min_in_flight is
> initialized to -1, how does the server->in_flight < min_in_flight
> test ever return true?

min_in_flight is declared as unsigned and then assigned -1.
I'm relying on C's unsigned conversion rules, which turn -1 into the
maximum value of an unsigned int.
Perhaps I should have been more explicit by assigning
UINT_MAX. Will do so now.
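[Editor's note: the conversion relied on here is guaranteed by the C standard, not left to the compiler; assigning -1 to an unsigned int always yields UINT_MAX. A minimal demonstration:]

```c
#include <assert.h>
#include <limits.h>

/*
 * Assigning -1 to an unsigned int is well-defined in C: the value is
 * converted modulo UINT_MAX + 1, which yields UINT_MAX. Writing
 * UINT_MAX directly expresses the same thing, just more explicitly.
 */
unsigned int from_minus_one(void)
{
	unsigned int min_in_flight = -1;	/* wraps to UINT_MAX */
	return min_in_flight;
}
```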

>
> Tom.
>
> On 12/20/2022 9:47 AM, Steve French wrote:
> > maybe a module load parm would be easier to use than kernel config
> > option (and give more realistic test comparison data for many)
> >
> > On Tue, Dec 20, 2022 at 7:29 AM Shyam Prasad N <nspmangalore@gmail.com> wrote:
> >>
> >> Hi Steve,
> >>
> >> Below is a patch for a new channel allocation strategy that we've been
> >> discussing for some time now. It uses the least loaded channel to send
> >> requests as compared to the simple round robin. This will help
> >> especially in cases where the server is not consuming requests at the
> >> same rate across the channels.
> >>
> >> I've put the changes behind a config option that has a default value of true.
> >> This way, we have an option to switch to the current default of round
> >> robin when needed.
> >>
> >> Please review.
> >>
> >> https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
> >>
> >> --
> >> Regards,
> >> Shyam
> >
> >
> >



-- 
Regards,
Shyam


* Re: [PATCH] cifs: use the least loaded channel for sending requests
  2022-12-21 15:33     ` Shyam Prasad N
@ 2022-12-21 16:46       ` Tom Talpey
  2022-12-22  9:56         ` Shyam Prasad N
  0 siblings, 1 reply; 7+ messages in thread
From: Tom Talpey @ 2022-12-21 16:46 UTC (permalink / raw)
  To: Shyam Prasad N; +Cc: Steve French, CIFS, Bharath S M, Aurélien Aptel

Question on this:

 > We will not be mixing traffic here. Channel bandwidth and type are
 > considered while establishing a new channel.
 > This change is only for distributing the requests among the channels
 > for the session.

I disagree. The iface_cmp() function in cifsglob.h returns a positive
value for all interfaces that are as fast as or faster than the one
being compared to. And weirdly, it only looks at rdma_capable when the
speeds are equal, and it ignores rss_capable unless rdma_capable
matches.

It also makes the following questionable assumption:

     * The iface_list is assumed to be sorted by speed.

In parse_server_interfaces(), new eligible entries are added in the
order the server returns them. The protocol doesn't require this! It's
entirely implementation specific.

In any event, I think if the client connects to a server on a 1GbE
interface, and discovers a 10GbE one, it will mix traffic on both.
In the current code, every other operation will switch interfaces.
In your code, it will only slightly prefer the 10GbE, when bulk data
begins to backlog the 1GbE.

So, unless you fix iface_cmp, and rework the selection logic in
parse_server_interfaces, I don't think the change does much.

What about the runtime selection?

Tom.



On 12/21/2022 10:33 AM, Shyam Prasad N wrote:
> On Tue, Dec 20, 2022 at 11:48 PM Tom Talpey <tom@talpey.com> wrote:
>>
>> I'd suggest a runtime configuration, personally. A config option
>> is undesirable, because it's very difficult to deploy. A module
>> parameter is only slightly better. The channel selection is a
>> natural for a runtime per-operation decision. And for the record,
>> I think a round-robin selection is a bad approach. Personally
>> I'd just yank it.
> 
> Hi Tom,
> 
> Thanks for taking a look at this.
> I was considering doing so. But was unsure if we'll still need a way
> to do round robin.
> Steve/Aurelien: Any objections to just remove the round-robin approach?
> 
>>
>> I'm uneasy about ignoring the channel bandwidth and channel type.
>> Low bandwidth channels, or mixing RDMA and non-RDMA, are going to
>> be very problematic for bulk data. In fact, the Windows client
>> never mixes such alternatives, it always selects identical link
>> speeds and transport types. The traffic will always find a way to
>> block on the slowest/worst connection.
>>
>> Do you feel there is some advantage to mixing traffic? If so, can
>> you elaborate on that?
> 
> We will not be mixing traffic here. Channel bandwidth and type are
> considered while establishing a new channel.
> This change is only for distributing the requests among the channels
> for the session.
> 
> That said, those decisions are sub-optimal today, IMO.
> I plan to send out some changes there too.
> 
>>
>> The patch you link to doesn't seem complete. If min_in_flight is
>> initialized to -1, how does the server->in_flight < min_in_flight
>> test ever return true?
> 
> min_in_flight is declared as unsigned and then assigned to -1.
> I'm relying on the compiler to use the max value for the unsigned int
> based on this.
> Perhaps I should have been more explicit by assigning this to
> UINT_MAX. Will do so now.
> 
>>
>> Tom.
>>
>> On 12/20/2022 9:47 AM, Steve French wrote:
>>> maybe a module load parm would be easier to use than kernel config
>>> option (and give more realistic test comparison data for many)
>>>
>>> On Tue, Dec 20, 2022 at 7:29 AM Shyam Prasad N <nspmangalore@gmail.com> wrote:
>>>>
>>>> Hi Steve,
>>>>
>>>> Below is a patch for a new channel allocation strategy that we've been
>>>> discussing for some time now. It uses the least loaded channel to send
>>>> requests as compared to the simple round robin. This will help
>>>> especially in cases where the server is not consuming requests at the
>>>> same rate across the channels.
>>>>
>>>> I've put the changes behind a config option that has a default value of true.
>>>> This way, we have an option to switch to the current default of round
>>>> robin when needed.
>>>>
>>>> Please review.
>>>>
>>>> https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
>>>>
>>>> --
>>>> Regards,
>>>> Shyam
>>>
>>>
>>>
> 
> 
> 


* Re: [PATCH] cifs: use the least loaded channel for sending requests
  2022-12-21 16:46       ` Tom Talpey
@ 2022-12-22  9:56         ` Shyam Prasad N
  2022-12-23 14:16           ` Shyam Prasad N
  0 siblings, 1 reply; 7+ messages in thread
From: Shyam Prasad N @ 2022-12-22  9:56 UTC (permalink / raw)
  To: Tom Talpey; +Cc: Steve French, CIFS, Bharath S M, Aurélien Aptel

On Wed, Dec 21, 2022 at 10:17 PM Tom Talpey <tom@talpey.com> wrote:
>
> Question on this:
>
>  > We will not be mixing traffic here. Channel bandwidth and type are
>  > considered while establishing a new channel.
>  > This change is only for distributing the requests among the channels
>  > for the session.
>
> I disagree. The iface_cmp() function in cifsglob.h returns a positive
> value for all interfaces which are as fast or faster than the one
> being compared-to. And weirdly, it only looks at rdma_capable when the
> speeds are equal, and it ignores rss_capable unless rdma_capable
> matches.
>

Hi Tom,

What I meant by my statement was that this change does not modify the
decision about which server interface the channels are connected to.

You are right about what iface_cmp does.
The goal of this function is to compare two interfaces and return 1
if interface 'a' is preferred; 0 if both are equivalent; and -1 if
'b' is preferred.
The one that is preferred sits closer to the head of the list after
parse_server_interfaces.
The "preference" is based on 3 factors:
1. speed
2. rdma_capable
3. rss_capable
I missed the case where interfaces could have both rdma and rss
capability. Will fix that soon.
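[Editor's note: the intended three-factor ordering can be sketched as a strict lexicographic comparator. The field names below are illustrative, taken from the discussion rather than from the actual cifsglob.h code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative interface descriptor, not the real cifs layout. */
struct iface {
	unsigned long long speed;
	bool rdma_capable;
	bool rss_capable;
};

/*
 * Strict three-key comparison: speed first, then RDMA capability,
 * then RSS capability. Returns >0 if 'a' is preferred, <0 if 'b'
 * is preferred, 0 if they are equivalent. Because each key is only
 * consulted when all earlier keys tie, an interface that is both
 * RDMA- and RSS-capable is handled consistently.
 */
int iface_cmp_sketch(const struct iface *a, const struct iface *b)
{
	if (a->speed != b->speed)
		return a->speed > b->speed ? 1 : -1;
	if (a->rdma_capable != b->rdma_capable)
		return a->rdma_capable ? 1 : -1;
	if (a->rss_capable != b->rss_capable)
		return a->rss_capable ? 1 : -1;
	return 0;
}
```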

Do you see any problem if channels of the same session mix rdma/rss
capable interfaces?
I've kept the overall logic the same as before my changes.

> It also makes the following questionable assumption:
>
>      * The iface_list is assumed to be sorted by speed.
>
> In parse_server_interfaces(), new eligible entries are added in the
> order the server returns them. The protocol doesn't require this! It's
> entirely implementation specific.

If the server returns interfaces of different speeds, the end result
of parse_server_interfaces is in decreasing order of speed; then rdma
capable; then rss capable interfaces.
The protocol does not require this. But the implementation we had
always chose to connect to the fastest interfaces first.

>
> In any event, I think if the client connects to a server on a 1GbE
> interface, and discovers a 10GbE one, it will mix traffic on both.
> In the current code, every other operation will switch interfaces.
> In your code, it will only slightly prefer the 10GbE, when bulk data
> begins to backlog the 1GbE.

Again, do you see a problem in mixing the interfaces, if we make
connections based on the speed of the connection?
I plan to change this logic to do a weighted distribution of channels
among interfaces (weight decided by speed of the interface), instead
of simply choosing the fastest one.
That way, parse_server_interfaces will not need to sort anymore.
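[Editor's note: the weighted distribution being proposed could look roughly like the sketch below. It is hypothetical, and for simplicity assumes the speed list is sorted fastest-first so leftover channels land on the fastest interfaces.]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Split nchans channels across nifaces interfaces in proportion to
 * interface speed. speed[] is assumed sorted fastest-first; any
 * remainder channels (from integer rounding) are handed out starting
 * with the fastest interface. Illustrative only, not kernel code.
 */
void weight_channels(const unsigned long long *speed, size_t nifaces,
		     unsigned int nchans, unsigned int *out)
{
	unsigned long long total = 0;
	unsigned int assigned = 0;
	size_t i;

	for (i = 0; i < nifaces; i++)
		total += speed[i];

	/* Integer share for each interface, rounded down. */
	for (i = 0; i < nifaces; i++) {
		out[i] = (unsigned int)(nchans * speed[i] / total);
		assigned += out[i];
	}

	/* Distribute leftover channels, fastest interfaces first. */
	for (i = 0; assigned < nchans; i = (i + 1) % nifaces) {
		out[i]++;
		assigned++;
	}
}
```

With such a scheme, a 10GbE interface gets roughly ten times as many channels as a 1GbE one, rather than all of them, and no sorted insert is needed in parse_server_interfaces.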

>
> So, unless you fix iface_cmp, and rework the selection logic in
> parse_server_interfaces, I don't think the change does much.
>
> What about the runtime selection?

It should not be necessary.

I spoke to Steve about this yesterday. He doesn't mind doing away with
the old logic of simple round robin too.
However, when in-flight == 0 for all channels, he feels that we should
not favour the same channel all the time.
So I'm planning to make some modifications to the current code in that
direction.
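[Editor's note: one way to avoid always favouring the same channel when every channel is idle is to rotate the scan's starting point between calls, so ties break in round-robin order. An illustrative sketch, not the actual modification:]

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

struct channel {
	unsigned int in_flight;
};

/*
 * Least-loaded selection with a rotating start offset. When several
 * channels are equally loaded (e.g. all idle), successive calls spread
 * requests across them instead of always picking channel 0. 'last'
 * is caller-maintained state (per-session, in practice).
 */
size_t pick_channel(const struct channel *ch, size_t nchans, size_t *last)
{
	unsigned int min_in_flight = UINT_MAX;
	size_t start = (*last + 1) % nchans;
	size_t best = start;

	for (size_t n = 0; n < nchans; n++) {
		size_t i = (start + n) % nchans;

		if (ch[i].in_flight < min_in_flight) {
			min_in_flight = ch[i].in_flight;
			best = i;
		}
	}
	*last = best;
	return best;
}
```

Under load this behaves like pure least-loaded selection; at idle it degenerates to round robin, which matches the behaviour Steve asked for.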

>
> Tom.
>
>
>
> On 12/21/2022 10:33 AM, Shyam Prasad N wrote:
> > On Tue, Dec 20, 2022 at 11:48 PM Tom Talpey <tom@talpey.com> wrote:
> >>
> >> I'd suggest a runtime configuration, personally. A config option
> >> is undesirable, because it's very difficult to deploy. A module
> >> parameter is only slightly better. The channel selection is a
> >> natural for a runtime per-operation decision. And for the record,
> >> I think a round-robin selection is a bad approach. Personally
> >> I'd just yank it.
> >
> > Hi Tom,
> >
> > Thanks for taking a look at this.
> > I was considering doing so. But was unsure if we'll still need a way
> > to do round robin.
> > Steve/Aurelien: Any objections to just remove the round-robin approach?
> >
> >>
> >> I'm uneasy about ignoring the channel bandwidth and channel type.
> >> Low bandwidth channels, or mixing RDMA and non-RDMA, are going to
> >> be very problematic for bulk data. In fact, the Windows client
> >> never mixes such alternatives, it always selects identical link
> >> speeds and transport types. The traffic will always find a way to
> >> block on the slowest/worst connection.
> >>
> >> Do you feel there is some advantage to mixing traffic? If so, can
> >> you elaborate on that?
> >
> > We will not be mixing traffic here. Channel bandwidth and type are
> > considered while establishing a new channel.
> > This change is only for distributing the requests among the channels
> > for the session.
> >
> > That said, those decisions are sub-optimal today, IMO.
> > I plan to send out some changes there too.
> >
> >>
> >> The patch you link to doesn't seem complete. If min_in_flight is
> >> initialized to -1, how does the server->in_flight < min_in_flight
> >> test ever return true?
> >
> > min_in_flight is declared as unsigned and then assigned to -1.
> > I'm relying on the compiler to use the max value for the unsigned int
> > based on this.
> > Perhaps I should have been more explicit by assigning this to
> > UINT_MAX. Will do so now.
> >
> >>
> >> Tom.
> >>
> >> On 12/20/2022 9:47 AM, Steve French wrote:
> >>> maybe a module load parm would be easier to use than kernel config
> >>> option (and give more realistic test comparison data for many)
> >>>
> >>> On Tue, Dec 20, 2022 at 7:29 AM Shyam Prasad N <nspmangalore@gmail.com> wrote:
> >>>>
> >>>> Hi Steve,
> >>>>
> >>>> Below is a patch for a new channel allocation strategy that we've been
> >>>> discussing for some time now. It uses the least loaded channel to send
> >>>> requests as compared to the simple round robin. This will help
> >>>> especially in cases where the server is not consuming requests at the
> >>>> same rate across the channels.
> >>>>
> >>>> I've put the changes behind a config option that has a default value of true.
> >>>> This way, we have an option to switch to the current default of round
> >>>> robin when needed.
> >>>>
> >>>> Please review.
> >>>>
> >>>> https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
> >>>>
> >>>> --
> >>>> Regards,
> >>>> Shyam
> >>>
> >>>
> >>>
> >
> >
> >



-- 
Regards,
Shyam


* Re: [PATCH] cifs: use the least loaded channel for sending requests
  2022-12-22  9:56         ` Shyam Prasad N
@ 2022-12-23 14:16           ` Shyam Prasad N
  0 siblings, 0 replies; 7+ messages in thread
From: Shyam Prasad N @ 2022-12-23 14:16 UTC (permalink / raw)
  To: Tom Talpey; +Cc: Steve French, CIFS, Bharath S M, Aurélien Aptel

On Thu, Dec 22, 2022 at 3:26 PM Shyam Prasad N <nspmangalore@gmail.com> wrote:
>
> On Wed, Dec 21, 2022 at 10:17 PM Tom Talpey <tom@talpey.com> wrote:
> >
> > Question on this:
> >
> >  > We will not be mixing traffic here. Channel bandwidth and type are
> >  > considered while establishing a new channel.
> >  > This change is only for distributing the requests among the channels
> >  > for the session.
> >
> > I disagree. The iface_cmp() function in cifsglob.h returns a positive
> > value for all interfaces which are as fast or faster than the one
> > being compared-to. And weirdly, it only looks at rdma_capable when the
> > speeds are equal, and it ignores rss_capable unless rdma_capable
> > matches.
> >
>
> Hi Tom,
>
> What I meant by my statement was that this change does not modify the
> decision about which server interface the channels are connected to.
>
> You are right about what iface_cmp does.
> The goal of this function was to compare two interfaces, and return 1
> if interface 'a' is more preferred; 0 if both are the same; and -1 if
> b is preferred.
> The one that is preferred sits closer to the head of the list after
> parse_server_interfaces.
> The "preference" is based on 3 factors:
> 1. speed
> 2. rdma_capable
> 3. rss_capable
> I missed the case where interfaces could have both rdma and rss
> capability. Will fix that soon.
>
> Do you see any problem if channels of the same session mix rdma/rss
> capable interfaces?
> I've kept the overall logic the same as before my changes.
>
> > It also makes the following questionable assumption:
> >
> >      * The iface_list is assumed to be sorted by speed.
> >
> > In parse_server_interfaces(), new eligible entries are added in the
> > order the server returns them. The protocol doesn't require this! It's
> > entirely implementation specific.
>
> If the server returns interfaces of different speeds, the end result
> of parse_server_interfaces is in decreasing order of speed; then rdma
> capable; then rss capable interfaces.
> The protocol does not require this. But the implementation we had
> always chose to connect to the fastest interfaces first.
>
> >
> > In any event, I think if the client connects to a server on a 1GbE
> > interface, and discovers a 10GbE one, it will mix traffic on both.
> > In the current code, every other operation will switch interfaces.
> > In your code, it will only slightly prefer the 10GbE, when bulk data
> > begins to backlog the 1GbE.
>
> Again, do you see a problem in mixing the interfaces, if we make
> connections based on the speed of the connection?
> I plan to change this logic to do a weighted distribution of channels
> among interfaces (weight decided by speed of the interface), instead
> of simply choosing the fastest one.
> That way, parse_server_interfaces will not need to sort anymore.
>
> >
> > So, unless you fix iface_cmp, and rework the selection logic in
> > parse_server_interfaces, I don't think the change does much.
> >
> > What about the runtime selection?
>
> It should not be necessary.
>
> I spoke to Steve about this yesterday. He doesn't mind doing away with
> the old logic of simple round robin too.
> However, when in-flight == 0 for all channels, he feels that we should
> not favour the same channel all the time.
> So I'm planning to make some modifications to the current code in that
> direction.
>

Here's the updated version of the patch:
https://github.com/sprasad-microsoft/smb3-kernel-client/commit/ea9b6b0d65756e55e4f6b89313da6bda61687929.patch
@Steve French @Tom Talpey Please review.

I'm working on another patch for interface selection while creating a
new channel.
Will send that out soon.

> >
> > Tom.
> >
> >
> >
> > On 12/21/2022 10:33 AM, Shyam Prasad N wrote:
> > > On Tue, Dec 20, 2022 at 11:48 PM Tom Talpey <tom@talpey.com> wrote:
> > >>
> > >> I'd suggest a runtime configuration, personally. A config option
> > >> is undesirable, because it's very difficult to deploy. A module
> > >> parameter is only slightly better. The channel selection is a
> > >> natural for a runtime per-operation decision. And for the record,
> > >> I think a round-robin selection is a bad approach. Personally
> > >> I'd just yank it.
> > >
> > > Hi Tom,
> > >
> > > Thanks for taking a look at this.
> > > I was considering doing so. But was unsure if we'll still need a way
> > > to do round robin.
> > > Steve/Aurelien: Any objections to just remove the round-robin approach?
> > >
> > >>
> > >> I'm uneasy about ignoring the channel bandwidth and channel type.
> > >> Low bandwidth channels, or mixing RDMA and non-RDMA, are going to
> > >> be very problematic for bulk data. In fact, the Windows client
> > >> never mixes such alternatives, it always selects identical link
> > >> speeds and transport types. The traffic will always find a way to
> > >> block on the slowest/worst connection.
> > >>
> > >> Do you feel there is some advantage to mixing traffic? If so, can
> > >> you elaborate on that?
> > >
> > > We will not be mixing traffic here. Channel bandwidth and type are
> > > considered while establishing a new channel.
> > > This change is only for distributing the requests among the channels
> > > for the session.
> > >
> > > That said, those decisions are sub-optimal today, IMO.
> > > I plan to send out some changes there too.
> > >
> > >>
> > >> The patch you link to doesn't seem complete. If min_in_flight is
> > >> initialized to -1, how does the server->in_flight < min_in_flight
> > >> test ever return true?
> > >
> > > min_in_flight is declared as unsigned and then assigned to -1.
> > > I'm relying on the compiler to use the max value for the unsigned int
> > > based on this.
> > > Perhaps I should have been more explicit by assigning this to
> > > UINT_MAX. Will do so now.
> > >
> > >>
> > >> Tom.
> > >>
> > >> On 12/20/2022 9:47 AM, Steve French wrote:
> > >>> maybe a module load parm would be easier to use than kernel config
> > >>> option (and give more realistic test comparison data for many)
> > >>>
> > >>> On Tue, Dec 20, 2022 at 7:29 AM Shyam Prasad N <nspmangalore@gmail.com> wrote:
> > >>>>
> > >>>> Hi Steve,
> > >>>>
> > >>>> Below is a patch for a new channel allocation strategy that we've been
> > >>>> discussing for some time now. It uses the least loaded channel to send
> > >>>> requests as compared to the simple round robin. This will help
> > >>>> especially in cases where the server is not consuming requests at the
> > >>>> same rate across the channels.
> > >>>>
> > >>>> I've put the changes behind a config option that has a default value of true.
> > >>>> This way, we have an option to switch to the current default of round
> > >>>> robin when needed.
> > >>>>
> > >>>> Please review.
> > >>>>
> > >>>> https://github.com/sprasad-microsoft/smb3-kernel-client/commit/28b96fd89f7d746fc2b6c68682527214a55463f9.patch
> > >>>>
> > >>>> --
> > >>>> Regards,
> > >>>> Shyam
> > >>>
> > >>>
> > >>>
> > >
> > >
> > >
>
>
>
> --
> Regards,
> Shyam



-- 
Regards,
Shyam


end of thread

Thread overview: 7+ messages
2022-12-20 13:28 [PATCH] cifs: use the least loaded channel for sending requests Shyam Prasad N
2022-12-20 14:47 ` Steve French
2022-12-20 18:18   ` Tom Talpey
2022-12-21 15:33     ` Shyam Prasad N
2022-12-21 16:46       ` Tom Talpey
2022-12-22  9:56         ` Shyam Prasad N
2022-12-23 14:16           ` Shyam Prasad N
