From: Tony Lu <tonylu@linux.alibaba.com>
To: Karsten Graul <kgraul@linux.ibm.com>
Cc: "D. Wythe" <alibuda@linux.alibaba.com>,
	dust.li@linux.alibaba.com, kuba@kernel.org, davem@davemloft.net,
	netdev@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-rdma@vger.kernel.org
Subject: Re: [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock listen queue
Date: Thu, 20 Jan 2022 21:39:33 +0800
Message-ID: <YelmFWn7ot0iQCYG@TonyMac-Alibaba>
In-Reply-To: <5a5ba1b6-93d7-5c1e-aab2-23a52727fbd1@linux.ibm.com>

On Thu, Jan 13, 2022 at 09:07:51AM +0100, Karsten Graul wrote:
> On 06/01/2022 08:05, Tony Lu wrote:
> 
> I think of the following approach: the default maximum number of active workers in a
> work queue is defined by WQ_MAX_ACTIVE (512). When this limit is hit, we
> have slightly fewer than 512 parallel SMC handshakes running at that moment,
> and new workers would be enqueued without becoming active.
> In that case (max active workers reached) I would tend to fall back new connections
> to TCP. We would end up with fewer connections using SMC, but for the user space
> applications there would be nearly no change compared to TCP (no dropped TCP connection
> attempts, no need to reconnect).
> Imho, most users will never run into this problem, so I think it's fine to behave like this.

This makes sense to me, thanks.
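
To make sure I understand the idea, a rough sketch (NOT actual SMC code:
smc_hs_in_flight and smc_fallback_new_conn() are made-up names; only
smc_hs_wq, smc_listen_work and WQ_MAX_ACTIVE exist today):

/* Count in-flight handshake workers; once we approach the workqueue's
 * WQ_MAX_ACTIVE (512) limit, let the new connection fall back to TCP
 * instead of queueing yet another handshake worker.
 */
static atomic_t smc_hs_in_flight = ATOMIC_INIT(0);

static void smc_queue_handshake(struct smc_sock *new_smc)
{
	if (atomic_inc_return(&smc_hs_in_flight) >= WQ_MAX_ACTIVE) {
		atomic_dec(&smc_hs_in_flight);
		/* hypothetical helper: accept as plain TCP */
		smc_fallback_new_conn(new_smc);
		return;
	}
	/* the worker decrements smc_hs_in_flight when it completes */
	queue_work(smc_hs_wq, &new_smc->smc_listen_work);
}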

> 
> As far as I understand you, you still see a good reason for having another behavior
> implemented in parallel (controllable by the user) which enqueues all incoming connections
> like in your patch proposal? But how to deal with the out-of-memory problems that might
> happen with that?

There is a possible scenario in which the user only wants to use the SMC
protocol, for example in a performance benchmark, or when the application
explicitly specifies SMC. Such users can afford a lower rate of incoming
connection creation, but enjoy the higher QPS once the connections are
established.

> Let's decide that when you have a specific control that you want to implement.
> I want to have a very good reason to introduce another interface into the SMC module,
> making the code more complex and all of that. The decision for the netlink interface
> was also made because we have the impression that this is the NEW way to go, and
> since we had no interface before, we started with the most modern way to implement it.
> 
> TCP et al. have a history with sysfs, so that's why it is still there.
> But I might be wrong on that...

Thanks for the background on the decision for the new control interface,
which I didn't know about. I understand your reasoning, and we would be
glad to contribute the knobs to smc_netlink.c in the next patches.
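
To make this concrete, the kind of knob we have in mind could look roughly
like this (the command, attribute and smc_hs_limit names below are invented
for illustration; only the smc_gen_nl_ops[] array exists in smc_netlink.c
today):

/* hypothetical new attribute and doit handler for the existing SMC
 * generic netlink family in net/smc/smc_netlink.c
 */
enum {
	SMC_NLA_HS_LIMIT_UNSPEC,
	SMC_NLA_HS_LIMIT_MAX,	/* u32: max queued SMC handshakes */
};

static int smc_nl_set_hs_limit(struct sk_buff *skb, struct genl_info *info)
{
	if (!info->attrs[SMC_NLA_HS_LIMIT_MAX])
		return -EINVAL;
	/* smc_hs_limit would be read on the listen path */
	WRITE_ONCE(smc_hs_limit,
		   nla_get_u32(info->attrs[SMC_NLA_HS_LIMIT_MAX]));
	return 0;
}

/* plus one more entry in smc_gen_nl_ops[] to register the command */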

There is one more thing I want to discuss here: persistent configuration.
We need to store the new config on the system and make sure it is loaded
correctly after boot. A possible solution is to extend smc-tools to handle
the new config and have systemd load it automatically. If that works, we
would be glad to contribute these changes to smc-tools as well.
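
The rough shape we have in mind is a oneshot service that replays the saved
config early during boot, something like the sketch below (the unit name,
config path and the smc-config command are all made up; smc-tools has no
such command yet):

# /etc/systemd/system/smc-config.service -- illustrative only
[Unit]
Description=Restore persistent SMC configuration
After=network-pre.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
# 'smc-config --load' is a hypothetical smc-tools command
ExecStart=/usr/sbin/smc-config --load /etc/smc/smc.conf

[Install]
WantedBy=multi-user.target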

Thank you.
Tony Lu

Thread overview: 18+ messages
2022-01-04 13:12 [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock listen queue D. Wythe
2022-01-04 13:45 ` Karsten Graul
2022-01-04 16:17   ` D. Wythe
2022-01-05  4:40   ` D. Wythe
2022-01-05  8:28     ` Tony Lu
2022-01-05  8:57     ` dust.li
2022-01-05 13:17       ` Karsten Graul
2022-01-05 15:06         ` D. Wythe
2022-01-05 19:13           ` Karsten Graul
2022-01-06  7:05             ` Tony Lu
2022-01-13  8:07               ` Karsten Graul
2022-01-13 18:50                 ` Jakub Kicinski
2022-01-20 13:39                 ` Tony Lu [this message]
2022-01-20 16:00                   ` Stefan Raspl
2022-01-21  2:47                     ` Tony Lu
2022-02-16 11:46                 ` dust.li
2022-01-06  3:51           ` D. Wythe
2022-01-06  9:54             ` Karsten Graul
