From: Stefan Raspl <email@example.com>
To: Tony Lu <firstname.lastname@example.org>, Karsten Graul <email@example.com>
Cc: "D. Wythe" <firstname.lastname@example.org>,
email@example.com, firstname.lastname@example.org, email@example.com,
Subject: Re: [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock listen queue
Date: Thu, 20 Jan 2022 17:00:18 +0100
Message-ID: <firstname.lastname@example.org>
On 1/20/22 14:39, Tony Lu wrote:
> On Thu, Jan 13, 2022 at 09:07:51AM +0100, Karsten Graul wrote:
>> On 06/01/2022 08:05, Tony Lu wrote:
>> I think of the following approach: the default maximum of active workers in a
>> work queue is defined by WQ_MAX_ACTIVE (512). When this limit is hit, we
>> have slightly fewer than 512 parallel SMC handshakes running at the moment,
>> and new workers would be enqueued without becoming active.
>> In that case (max active workers reached) I would tend to fall back new connections
>> to TCP. We would end up with fewer connections using SMC, but for the user space
>> applications there would be nearly no change compared to TCP (no dropped TCP connection
>> attempts, no need to reconnect).
>> Imho, most users will never run into this problem, so I think it's fine to behave like this.
> This makes sense to me, thanks.
>> As far as I understand you, you still see a good reason in having another behavior
>> implemented in parallel (controllable by user) which enqueues all incoming connections
>> like in your patch proposal? But how to deal with the out-of-memory problems that might
>> happen with that?
> There is a possible scenario where the user only wants to use the SMC protocol,
> such as performance benchmarking, or when SMC is explicitly specified: they can
> afford the lower speed of incoming connection creation, but enjoy the
> higher QPS after creation.
>> Let's decide that when you have a specific control that you want to implement.
>> I want to have a very good reason to introduce another interface into the SMC module,
>> making the code more complex and all of that. The decision for the netlink interface
>> was also made because we had the impression that this is the NEW way to go, and
>> since we had no interface before, we started with the most modern way to implement it.
>> TCP et al. have a history with sysfs, so that's why it is still there.
>> But I might be wrong on that...
> Thanks for the background on the decision for the new control interface,
> which I didn't know about. I understand your reasoning about the interface.
> We are glad to contribute the knobs to smc_netlink.c in the next patches.
> There is something I want to discuss here about persistent
> configuration: we need to store the new config on the system and make sure
> it is loaded correctly after boot. A possible solution is to
> extend smc-tools for the new config and work with systemd for auto-loading.
> If that works, we are glad to contribute these changes to smc-tools.
I'd definitely be open to looking into patches for smc-tools that extend it to
configure SMC properties, and that provide the capability to read (and apply) a
config from a file! We can also discuss what you'd imagine as an interface before you
implement it.
Thread overview: 18+ messages
2022-01-04 13:12 [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock listen queue D. Wythe
2022-01-04 13:45 ` Karsten Graul
2022-01-04 16:17 ` D. Wythe
2022-01-05 4:40 ` D. Wythe
2022-01-05 8:28 ` Tony Lu
2022-01-05 8:57 ` dust.li
2022-01-05 13:17 ` Karsten Graul
2022-01-05 15:06 ` D. Wythe
2022-01-05 19:13 ` Karsten Graul
2022-01-06 7:05 ` Tony Lu
2022-01-13 8:07 ` Karsten Graul
2022-01-13 18:50 ` Jakub Kicinski
2022-01-20 13:39 ` Tony Lu
2022-01-20 16:00 ` Stefan Raspl [this message]
2022-01-21 2:47 ` Tony Lu
2022-02-16 11:46 ` dust.li
2022-01-06 3:51 ` D. Wythe
2022-01-06 9:54 ` Karsten Graul