* low pending handshake limit
@ 2023-09-04 12:39 Moritz Wanzenböck
2023-09-04 15:13 ` Chuck Lever III
0 siblings, 1 reply; 4+ messages in thread
From: Moritz Wanzenböck @ 2023-09-04 12:39 UTC (permalink / raw)
To: kernel-tls-handshake
Hi all,
I'm currently working on enabling TLS support for DRBD, so I'm very
keen to use the handshake infrastructure. During testing I noticed that
the allowed number of pending handshakes is quite low. This seems to
stem from the following calculation:
/*
* Arbitrary limit to prevent handshakes that do not make
* progress from clogging up the system. The cap scales up
* with the amount of physical memory on the system.
*/
si_meminfo(&si);
tmp = si.totalram / (25 * si.mem_unit);
hn->hn_pending_max = clamp(tmp, 3UL, 50UL);
Which, for the typical VMs I use for testing (1Gi RAM), ends up being
just 3 handshakes. The limits seem too low in general, even in the best
case: if a node has just booted and starts connecting to all configured
DRBD devices, we could easily hit even the upper limit of 50.
The calculation itself also doesn't make much sense to me: it allows
more pending handshakes when using a smaller page size?
Would it be possible to increase the number of pending handshakes?
Best regards,
Moritz
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: low pending handshake limit
2023-09-04 12:39 low pending handshake limit Moritz Wanzenböck
@ 2023-09-04 15:13 ` Chuck Lever III
2023-09-05 8:56 ` Moritz Wanzenböck
0 siblings, 1 reply; 4+ messages in thread
From: Chuck Lever III @ 2023-09-04 15:13 UTC (permalink / raw)
To: Moritz Wanzenböck
Cc: kernel-tls-handshake, open list:NETWORKING [GENERAL], Paolo Abeni
Hi-
> On Sep 4, 2023, at 8:39 AM, Moritz Wanzenböck <moritz.wanzenboeck@linbit.com> wrote:
>
> Hi all,
>
> I'm currently working on enabling TLS support for DRBD, so I'm very keen to use the handshake infrastructure.
I'm happy to see the handshake infrastructure get more usage.
> During testing I noticed that the allowed number of pending handshakes is quite low. This seems to stem from the following calculation:
>
> /*
> * Arbitrary limit to prevent handshakes that do not make
> * progress from clogging up the system. The cap scales up
> * with the amount of physical memory on the system.
> */
> si_meminfo(&si);
> tmp = si.totalram / (25 * si.mem_unit);
> hn->hn_pending_max = clamp(tmp, 3UL, 50UL);
>
> Which, for the typical VMs I use for testing (1Gi RAM), ends up being just 3 handshakes. The limits seem too low in general, even in the best case: if a node has just booted and starts connecting to all configured DRBD devices, we could easily hit even the upper limit of 50.
>
> The calculation itself also doesn't make much sense to me: it allows more pending handshakes when using a smaller page size?
>
> Would it be possible to increase the number of pending handshakes?
IIRC I added the dynamic computation in response to a review
comment from Paolo (cc'd). I think the limit values are arbitrary,
we just want a sensible cap on the number of pending handshakes,
and on smaller systems, that limit should be a smaller value.
It's true that a handshake can fail if that limit is hit, but
the consumer ought to be able to retry after a brief delay in
that case.
I am open to discussing changes if retrying proves to be a
challenge.
--
Chuck Lever
* Re: low pending handshake limit
2023-09-04 15:13 ` Chuck Lever III
@ 2023-09-05 8:56 ` Moritz Wanzenböck
2023-09-05 11:30 ` Paolo Abeni
0 siblings, 1 reply; 4+ messages in thread
From: Moritz Wanzenböck @ 2023-09-05 8:56 UTC (permalink / raw)
To: Chuck Lever III; +Cc: kernel-tls-handshake, Paolo Abeni
Hi Chuck,
On Mon, Sep 4 2023 at 15:13:25 +0000, Chuck Lever III
<chuck.lever@oracle.com> wrote:
> Hi-
>
>> On Sep 4, 2023, at 8:39 AM, Moritz Wanzenböck
>> <moritz.wanzenboeck@linbit.com> wrote:
>>
>> Hi all,
>>
>> I'm currently working on enabling TLS support for DRBD, so I'm very
>> keen to use the handshake infrastructure.
>
> I'm happy to see the handshake infrastructure get more usage.
>
>
>> During testing I noticed that the allowed number of pending
>> handshakes is quite low. This seems to stem from the following
>> calculation:
>>
>> /*
>> * Arbitrary limit to prevent handshakes that do not make
>> * progress from clogging up the system. The cap scales up
>> * with the amount of physical memory on the system.
>> */
>> si_meminfo(&si);
>> tmp = si.totalram / (25 * si.mem_unit);
>> hn->hn_pending_max = clamp(tmp, 3UL, 50UL);
>>
>> Which, for the typical VMs I use for testing (1Gi RAM), ends up
>> being just 3 handshakes. The limits seem too low in general, even in
>> the best case: if a node has just booted and starts connecting to
>> all configured DRBD devices, we could easily hit even the upper
>> limit of 50.
>>
>> The calculation itself also doesn't make much sense to me: it
>> allows more pending handshakes when using a smaller page size?
>>
>> Would it be possible to increase the number of pending handshakes?
>
> IIRC I added the dynamic computation in response to a review
> comment from Paolo (cc'd). I think the limit values are arbitrary,
> we just want a sensible cap on the number of pending handshakes,
> and on smaller systems, that limit should be a smaller value.
>
> It's true that a handshake can fail if that limit is hit, but
> the consumer ought to be able to retry after a brief delay in
> that case.
>
> I am open to discussing changes if retrying proves to be a
> challenge.
Thanks for the explanation. Actually, retrying is not an issue. I was
initially wary because I thought the requests remained pending until
the handshake was complete. It looks like I was wrong about that; a
request is only pending until the netlink message is sent to the
user-space utility.
Best regards,
Moritz
* Re: low pending handshake limit
2023-09-05 8:56 ` Moritz Wanzenböck
@ 2023-09-05 11:30 ` Paolo Abeni
0 siblings, 0 replies; 4+ messages in thread
From: Paolo Abeni @ 2023-09-05 11:30 UTC (permalink / raw)
To: Moritz Wanzenböck, Chuck Lever III; +Cc: kernel-tls-handshake
On Tue, 2023-09-05 at 10:56 +0200, Moritz Wanzenböck wrote:
> On Mon, Sep 4 2023 at 15:13:25 +0000, Chuck Lever III
> <chuck.lever@oracle.com> wrote:
> > Hi-
> >
> > > On Sep 4, 2023, at 8:39 AM, Moritz Wanzenböck
> > > <moritz.wanzenboeck@linbit.com> wrote:
> > >
> > > Hi all,
> > >
> > > I'm currently working on enabling TLS support for DRBD, so I'm very
> > > keen to use the handshake infrastructure.
> >
> > I'm happy to see the handshake infrastructure get more usage.
> >
> >
> > > During testing I noticed that the allowed number of pending
> > > handshakes is quite low. This seems to stem from the following
> > > calculation:
> > >
> > > /*
> > > * Arbitrary limit to prevent handshakes that do not make
> > > * progress from clogging up the system. The cap scales up
> > > * with the amount of physical memory on the system.
> > > */
> > > si_meminfo(&si);
> > > tmp = si.totalram / (25 * si.mem_unit);
> > > hn->hn_pending_max = clamp(tmp, 3UL, 50UL);
> > >
> > > Which, for the typical VMs I use for testing (1Gi RAM), ends up
> > > being just 3 handshakes. The limits seem too low in general, even
> > > in the best case: if a node has just booted and starts connecting
> > > to all configured DRBD devices, we could easily hit even the
> > > upper limit of 50.
> > >
> > > The calculation itself also doesn't make much sense to me: it
> > > allows more pending handshakes when using a smaller page size?
> > >
> > > Would it be possible to increase the number of pending handshakes?
> >
> > IIRC I added the dynamic computation in response to a review
> > comment from Paolo (cc'd). I think the limit values are arbitrary,
> > we just want a sensible cap on the number of pending handshakes,
> > and on smaller systems, that limit should be a smaller value.
> >
> > It's true that a handshake can fail if that limit is hit, but
> > the consumer ought to be able to retry after a brief delay in
> > that case.
> >
> > I am open to discussing changes if retrying proves to be a
> > challenge.
>
> Thanks for the explanation. Actually, retrying is not an issue. I was
> initially wary because I thought the requests remained pending until
> the handshake was complete. It looks like I was wrong about that; a
> request is only pending until the netlink message is sent to the
> user-space utility.
For the record, if some real issue with the small limits should arise
again, I think it should be possible to expose a user-space tunable
(sysctl? socket option? both?) to let the application adjust the limit.
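The sysctl variant of that idea might look roughly like this kernel-side sketch. The procname, defaults, and placement are invented for illustration; only the clamp values 3 and 50 come from the code quoted earlier in the thread:

```c
/*
 * Sketch only: a sysctl letting user space raise the pending-handshake
 * cap. Names and defaults here are hypothetical, not an accepted
 * kernel interface.
 */
static unsigned int handshake_pending_max = 50;
static unsigned int hn_pending_floor = 3;
static unsigned int hn_pending_ceiling = 1024;

static struct ctl_table handshake_sysctl_table[] = {
	{
		.procname	= "pending_max",
		.data		= &handshake_pending_max,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec_minmax,
		.extra1		= &hn_pending_floor,
		.extra2		= &hn_pending_ceiling,
	},
	{ }
};

/* registered e.g. from an init path: */
/* register_net_sysctl(&init_net, "net/handshake", handshake_sysctl_table); */
```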
Cheers,
Paolo