From: "D. Wythe" <firstname.lastname@example.org>
To: Karsten Graul <email@example.com>
Cc: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org,
Subject: Re: [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock listen queue
Date: Wed, 5 Jan 2022 00:17:27 +0800 [thread overview]
Message-ID: <20220104161727.GA123107@e02h04389.eu6sqa> (raw)
It seems my last mail was rejected for some reason, so I am resending it
to confirm. Sorry to bother you if you have already seen it.
Got your point; that is indeed a problem with this patch.
As you noted, maybe we can use the backlog parameter of the listen socket
to limit the number of dangling connections, just like TCP does.
I'll work on it in the next few days. Please let me know if you have any further suggestions.
On Tue, Jan 04, 2022 at 02:45:35PM +0100, Karsten Graul wrote:
> On 04/01/2022 14:12, D. Wythe wrote:
> > From: "D. Wythe" <email@example.com>
> > In an nginx/wrk multithreaded, 10K-connection benchmark, the
> > backend TCP connections are established very slowly, and many TCP
> > connections stay in SYN_SENT state.
> I see what you are trying to solve here.
> So what happens with your patch now is that we are accepting way more connections
> in advance and queue them up for the SMC connection handshake worker.
> The connection handshake worker itself will not run faster with this change, so overall
> it should be the same time that is needed to establish all connections.
> What you solve is that when 10k connections are started at the same time, some of them
> will be dropped due to TCP 3-way handshake timeouts. Your patch avoids that, but one can now flood
> the stack with a near-infinite number of dangling sockets waiting for the SMC handshake, maybe even
> causing OOM conditions.
> What should be respected with such a change would be the backlog parameter for the listen socket,
> i.e. how many backlog connections are requested by the user space application?
> There is no such handling of backlog right now, and due to the 'braking' workers we avoided
> flooding the kernel with too many dangling connections. With your change there should be a way to
> limit this kind of connection.
Thread overview: 18+ messages
2022-01-04 13:12 [PATCH net-next v2] net/smc: Reduce overflow of smc clcsock listen queue D. Wythe
2022-01-04 13:45 ` Karsten Graul
2022-01-04 16:17 ` D. Wythe [this message]
2022-01-05 4:40 ` D. Wythe
2022-01-05 8:28 ` Tony Lu
2022-01-05 8:57 ` dust.li
2022-01-05 13:17 ` Karsten Graul
2022-01-05 15:06 ` D. Wythe
2022-01-05 19:13 ` Karsten Graul
2022-01-06 7:05 ` Tony Lu
2022-01-13 8:07 ` Karsten Graul
2022-01-13 18:50 ` Jakub Kicinski
2022-01-20 13:39 ` Tony Lu
2022-01-20 16:00 ` Stefan Raspl
2022-01-21 2:47 ` Tony Lu
2022-02-16 11:46 ` dust.li
2022-01-06 3:51 ` D. Wythe
2022-01-06 9:54 ` Karsten Graul