* multiple endpoints for a single peer -- implementation details
From: Motiejus Jakštys @ 2020-01-15  9:10 UTC
  To: wireguard

Hi all,

I thought I'd implement a prototype to use multiple endpoints for a
single peer, but after some analysis of "## Select new endpoint during
each handshake"[1], I'd like to share my concerns with future readers
who might attempt the same endeavor. TL;DR: I think the kernel is not
in the best position to do this; "decision making in user space" may be
more appropriate.

To make it happen, the handshake process would have to change.
Suggested new flow (a rough sketch in Go follows the list):
- Initiator sends a handshake packet to all endpoints quasi-simultaneously.
  - Each handshake is a new message with a different ephemeral key and
related state.
- Responder receives the first one and responds.
- Responder discards any further handshakes arriving within
1/INITIATIONS_PER_SECOND of the first.
- Responder may receive more after 1/INITIATIONS_PER_SECOND and
responds to those as well.
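
To make the discard rule concrete, here is a minimal, self-contained Go
sketch of the responder side. Only the 1/INITIATIONS_PER_SECOND rule
from the flow above is modelled; the constant name mirrors WireGuard's
INITIATIONS_PER_SECOND, and everything else is illustrative rather than
actual WireGuard code:

  package main

  import (
      "fmt"
      "time"
  )

  // Accept at most one initiation per second per peer (illustrative value).
  const initiationsPerSecond = 1

  type responder struct {
      lastAccepted time.Time
  }

  // shouldRespond implements the discard rule from the flow above: an
  // initiation arriving within 1/INITIATIONS_PER_SECOND of the
  // previously accepted one is dropped.
  func (r *responder) shouldRespond(now time.Time) bool {
      if now.Sub(r.lastAccepted) < time.Second/initiationsPerSecond {
          return false
      }
      r.lastAccepted = now
      return true
  }

  func main() {
      r := &responder{}
      t0 := time.Now()
      fmt.Println(r.shouldRespond(t0))                            // true: respond
      fmt.Println(r.shouldRespond(t0.Add(200*time.Millisecond)))  // false: discard
      fmt.Println(r.shouldRespond(t0.Add(1100*time.Millisecond))) // true: respond
  }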

The responder needs to maintain more than one handshake state (the
whole `noise_handshake` struct) for MAX_TIMER_HANDSHAKES. Following a
later suggestion in the thread, the number of states can be given an
upper bound of MAX_ENDPOINTS_PER_PEER (a TBD constant).
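
For illustration, a Go sketch of the bounded per-peer table this
implies; maxEndpointsPerPeer is the TBD constant above with an
arbitrary value, and noiseHandshake merely stands in for the kernel's
struct noise_handshake:

  package wgstate

  // TBD in the text above; the value here is arbitrary.
  const maxEndpointsPerPeer = 4

  // noiseHandshake stands in for the kernel's struct noise_handshake.
  type noiseHandshake struct {
      // ephemeral keys, hash chain, local index, ...
  }

  type peer struct {
      // One in-flight handshake state per probed endpoint, keyed by
      // the local (sender) index of that initiation.
      handshakes map[uint32]*noiseHandshake
  }

  // addHandshake enforces the MAX_ENDPOINTS_PER_PEER bound.
  func (p *peer) addHandshake(index uint32, hs *noiseHandshake) bool {
      if p.handshakes == nil {
          p.handshakes = make(map[uint32]*noiseHandshake)
      }
      if len(p.handshakes) >= maxEndpointsPerPeer {
          return false // at the bound: drop the new handshake state
      }
      p.handshakes[index] = hs
      return true
  }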

The responder's responses are technically different handshakes:
different ephemeral keys, different hashes, possibly different indices
(I haven't figured out the role of indices yet). Question 1 below stems
from this.

Questions:
1. How can the initiator associate the one-of-many handshakes it sent
with the one-of-many responses it received? I.e., is there common or
derivable data shared between a handshake request and its response? I
wasn't able to find any.
2. More of a concern: neither the kernel nor the wireguard-go
implementation is willing to accept more than one endpoint, and this
would be messy to extend:

include/uapi/linux/wireguard.h
  WGPEER_A_ENDPOINT: struct sockaddr_in or struct sockaddr_in6

device/uapi.go:
  case "endpoint":
  ...
    endpoint, err := CreateEndpoint(value)

Endpoint is fixed to a single UDP address, and both the kernel and
wireguard-go refuse unknown keys. For tooling backwards compatibility
(i.e. using newer wireguard-tools with older kernel implementations),
wireguard-tools would need to know the supported "features" of the
underlying implementation, and there is no version negotiation between
user space and the kernel. That makes it tricky to add features like
this.

Related gotcha: currently the DNS lookup happens in userspace during
interface configuration**, while the biggest selling point of "multiple
endpoints" is increasing the probability of reaching the peer. Note
that:
1. the wireguard-linux netlink API was not designed to be extensible
(and neither is wireguard-go's);
2. the DNS lookup is done at configuration time.

Given the above, I am suggesting that "## Decision-making in userspace"
would work better here. Userspace would regularly* issue handshake
initiations, measure how long each endpoint takes to respond, and hand
the *single* best endpoint over to the kernel to connect to (a rough
sketch follows below). Interestingly enough, this "userspace thing" is
a combination of wireguard-tools (for config parsing and MNL) and
wireguard-go (to initiate the handshake). Where do you think this could
belong? A userspace wireguard implementation in C, and/or a
wireguard-tools implementation in Go, would make this easy, but we have
neither. :)
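
Here is a rough Go sketch of that loop, assuming the wgctrl-go package
(golang.zx2c4.com/wireguard/wgctrl) for the final configuration step.
probeRTT is a placeholder: a real implementation would initiate an
actual WireGuard handshake and time the response, which is exactly the
wireguard-go piece mentioned above.

  package probe

  import (
      "net"
      "time"

      "golang.zx2c4.com/wireguard/wgctrl"
      "golang.zx2c4.com/wireguard/wgctrl/wgtypes"
  )

  // probeRTT stands in for "issue a handshake initiation and measure
  // how long the response takes". Timing a UDP dial, as done here, is
  // NOT a real reachability test; it is only a placeholder.
  func probeRTT(ep *net.UDPAddr) (time.Duration, error) {
      start := time.Now()
      conn, err := net.DialUDP("udp", nil, ep)
      if err != nil {
          return 0, err
      }
      conn.Close()
      return time.Since(start), nil
  }

  // pickAndSetEndpoint probes every candidate and hands the single
  // best endpoint to the kernel via the existing netlink API.
  func pickAndSetEndpoint(device string, peerKey wgtypes.Key, candidates []*net.UDPAddr) error {
      var best *net.UDPAddr
      var bestRTT time.Duration
      for _, ep := range candidates {
          rtt, err := probeRTT(ep)
          if err != nil {
              continue // skip unreachable candidates
          }
          if best == nil || rtt < bestRTT {
              best, bestRTT = ep, rtt
          }
      }
      if best == nil {
          return nil // nothing reachable; keep the current endpoint
      }
      client, err := wgctrl.New()
      if err != nil {
          return err
      }
      defer client.Close()
      return client.ConfigureDevice(device, wgtypes.Config{
          Peers: []wgtypes.PeerConfig{{
              PublicKey:  peerKey,
              UpdateOnly: true, // update the existing peer, never create it
              Endpoint:   best,
          }},
      })
  }

The UpdateOnly flag keeps the call from accidentally creating the peer;
only the endpoint of the already-configured peer changes.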

Motiejus

[1]: https://lists.zx2c4.com/pipermail/wireguard/2017-January/000917.html
[*]: the timing of, and conditions for, these "probing handshakes"
would need to be very carefully considered. TBD.
[**]: my wireguard server is behind a dynamic IP with a DNS TTL of 600 seconds.


* Re: multiple endpoints for a single peer -- implementation details
From: Toke Høiland-Jørgensen @ 2020-01-16  9:55 UTC
  To: Motiejus Jakštys, wireguard

Motiejus Jakštys <motiejus.jakstys@gmail.com> writes:

> Hi all,
>
> I thought I'd implement a prototype to use multiple endpoints for a
> single peer, but after some analysis of "## Select new endpoint during
> each handshake"[1], I'd like to share my concerns with future readers
> who might attempt the same endeavor. TL;DR: I think the kernel is not
> in the best position to do this; "decision making in user space" may
> be more appropriate.
>
> To make it happen, the handshake process would have to change.
> Suggested new flow:
> - Initiator sends a handshake packet to all endpoints quasi-simultaneously.
>   - Each handshake is a new message with a different ephemeral key and
> related state.
> - Responder receives the first one and responds.
> - Responder discards any further handshakes arriving within
> 1/INITIATIONS_PER_SECOND of the first.
> - Responder may receive more after 1/INITIATIONS_PER_SECOND and
> responds to those as well.
>
> The responder needs to maintain more than one handshake state (the
> whole `noise_handshake` struct) for MAX_TIMER_HANDSHAKES. Following a
> later suggestion in the thread, the number of states can be given an
> upper bound of MAX_ENDPOINTS_PER_PEER (a TBD constant).

Before you go and re-invent the happy eyeballs algorithm, may I suggest
you read the RFC? :)

https://tools.ietf.org/html/rfc8305

Specifically, the considerations for how to do multiple connection
attempts safely without overloading the network are relevant here. As
is the bit about sorting DNS responses.
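
For illustration, a generic Go sketch of RFC 8305-style staggered
attempts; this is not WireGuard code, and the 250 ms Connection Attempt
Delay is the RFC's recommended default (100 ms is its minimum):

  package main

  import (
      "context"
      "fmt"
      "net"
      "time"
  )

  // RFC 8305, section 5: recommended Connection Attempt Delay.
  const connectionAttemptDelay = 250 * time.Millisecond

  // dialStaggered starts one attempt per address, each delayed by
  // connectionAttemptDelay after the previous one, and returns the
  // first attempt that succeeds. Cancelling the context aborts the
  // losers.
  func dialStaggered(ctx context.Context, addrs []string) (net.Conn, error) {
      ctx, cancel := context.WithCancel(ctx)
      defer cancel()
      results := make(chan net.Conn, len(addrs))
      for i, addr := range addrs {
          go func(delay time.Duration, addr string) {
              select {
              case <-time.After(delay):
              case <-ctx.Done():
                  return
              }
              var d net.Dialer
              if conn, err := d.DialContext(ctx, "tcp", addr); err == nil {
                  results <- conn
              }
          }(time.Duration(i)*connectionAttemptDelay, addr)
      }
      select {
      case conn := <-results:
          return conn, nil // winner; deferred cancel() aborts the rest
      case <-ctx.Done():
          return nil, ctx.Err() // all attempts failed or timed out
      }
  }

  func main() {
      ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
      defer cancel()
      conn, err := dialStaggered(ctx, []string{"192.0.2.1:443", "example.com:443"})
      if err != nil {
          fmt.Println("all attempts failed:", err)
          return
      }
      fmt.Println("connected to", conn.RemoteAddr())
      conn.Close()
  }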

[...]

> 2. More of a concern: neither the kernel nor the wireguard-go
> implementation is willing to accept more than one endpoint, and this
> would be messy to extend:
>
> include/uapi/linux/wireguard.h
>   WGPEER_A_ENDPOINT: struct sockaddr_in or struct sockaddr_in6
>
> device/uapi.go:
>   case "endpoint":
>   ...
>     endpoint, err := CreateEndpoint(value)
>
> Endpoint is fixed to a single UDP address, and both the kernel and
> wireguard-go refuse unknown keys. For tooling backwards compatibility
> (i.e. using newer wireguard-tools with older kernel implementations),
> wireguard-tools would need to know the supported "features" of the
> underlying implementation, and there is no version negotiation between
> user space and the kernel. That makes it tricky to add features like
> this.

Eh? The kernel API is netlink - you could just add multiple
WGPEER_A_ENDPOINT attributes. Or add a new one
(WGPEER_A_ENDPOINTS_MULTI?). Same thing for wireguard-go (I assume).
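
For concreteness, here is a rough sketch of that encoding in Go, using
the mdlayher/netlink package that wgctrl-go builds on. The attribute
value mirrors WGPEER_A_ENDPOINT from include/uapi/linux/wireguard.h,
but the multi-endpoint semantics are hypothetical: with the kernel's
usual nla_parse behaviour, only the last repeated attribute would win
today.

  package wgnl

  import (
      "encoding/binary"
      "net"

      "github.com/mdlayher/netlink"
      "golang.org/x/sys/unix"
  )

  // WGPEER_A_ENDPOINT from include/uapi/linux/wireguard.h.
  const wgpeerAEndpoint = 4

  // sockaddrIn4 packs a struct sockaddr_in the way the kernel expects
  // for WGPEER_A_ENDPOINT (sa_family in host order, assumed
  // little-endian here; port in network order).
  func sockaddrIn4(addr *net.UDPAddr) []byte {
      b := make([]byte, unix.SizeofSockaddrInet4)
      binary.LittleEndian.PutUint16(b[0:2], unix.AF_INET)
      binary.BigEndian.PutUint16(b[2:4], uint16(addr.Port))
      copy(b[4:8], addr.IP.To4())
      return b
  }

  // encodePeerEndpoints emits one WGPEER_A_ENDPOINT attribute per
  // endpoint. Today's kernel parser keeps only the last one, so this
  // is only useful together with a kernel-side change (or a new
  // WGPEER_A_ENDPOINTS_MULTI attribute).
  func encodePeerEndpoints(endpoints []*net.UDPAddr) ([]byte, error) {
      ae := netlink.NewAttributeEncoder()
      for _, ep := range endpoints {
          ae.Bytes(wgpeerAEndpoint, sockaddrIn4(ep))
      }
      return ae.Encode()
  }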

> Given the above, I am suggesting that "## Decision-making in
> userspace" would work better here. Userspace would regularly* issue
> handshake initiations, measure how long each endpoint takes to
> respond, and hand the *single* best endpoint over to the kernel to
> connect to.

Why not just let userspace add more endpoints to an already established
peer, and have the kernel figure out the selection between them?

-Toke


* Re: multiple endpoints for a single peer -- implementation details
From: Toke Høiland-Jørgensen @ 2020-01-16 11:55 UTC
  To: Motiejus Jakštys; +Cc: wireguard

Motiejus Jakštys <motiejus.jakstys@gmail.com> writes:

> On Thu, Jan 16, 2020 at 11:55 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>
>> Motiejus Jakštys <motiejus.jakstys@gmail.com> writes:
>>
>> > [...]
>> Before you go and re-invent the happy eyeballs algorithm, may I suggest
>> you read the RFC? :)
>>
>> https://tools.ietf.org/html/rfc8305
>>
>> Specifically, the considerations for how to do multiple connection
>> attempts safely without overloading the network are relevant here. As
>> is the bit about sorting DNS responses.
>
> From the RFC I learned that the client should not issue parallel
> requests, and that the time between tries must be reasonable (say,
> 100 ms). So the original problem goes away. Thanks for the link.

You're welcome :)

>> [...]
>>
>> > 2. More of a concern: neither the kernel nor the wireguard-go
>> > implementation is willing to accept more than one endpoint, and this
>> > would be messy to extend:
>> >
>> > include/uapi/linux/wireguard.h
>> >   WGPEER_A_ENDPOINT: struct sockaddr_in or struct sockaddr_in6
>> >
>> > device/uapi.go:
>> >   case "endpoint":
>> >   ...
>> >     endpoint, err := CreateEndpoint(value)
>> >
>> > Endpoint is fixed to a single UDP address, and both the kernel and
>> > wireguard-go refuse unknown keys. For tooling backwards compatibility
>> > (i.e. using newer wireguard-tools with older kernel implementations),
>> > wireguard-tools would need to know the supported "features" of the
>> > underlying implementation, and there is no version negotiation
>> > between user space and the kernel. That makes it tricky to add
>> > features like this.
>>
>> Eh? The kernel API is netlink - you could just add multiple
>> WGPEER_A_ENDPOINT attributes. Or add a new one
>> (WGPEER_A_ENDPOINTS_MULTI?). Same thing for wireguard-go (I assume).
>
> A new netlink attribute in userspace alone would mean that older
> kernels will not understand the message. Now that I've looked at the
> headers again, the kernel does send WGPEER_A_PROTOCOL_VERSION, which
> solves the problem. Go has an equivalent "protocol_version".

Or a new userspace just adds the new attribute, and if the kernel
doesn't understand it, it will be rejected and userspace can fall back
to the old behaviour. At least I hope that's what the kernel does :)
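
A small Go sketch of that fall-back pattern, under the assumption that
a rejected unknown attribute surfaces as EINVAL or EOPNOTSUPP (which
errno you actually get depends on the kernel's netlink validation);
configureMultiEndpoint and configureSingleEndpoint are hypothetical
helpers:

  package wgfallback

  import (
      "errors"
      "net"

      "golang.org/x/sys/unix"
  )

  // configureMultiEndpoint is hypothetical: it would send the new
  // multi-endpoint attribute and surface the kernel's errno. The
  // placeholder below pretends the kernel rejected it.
  func configureMultiEndpoint(device string, endpoints []*net.UDPAddr) error {
      return unix.EINVAL
  }

  // configureSingleEndpoint is a placeholder for the existing
  // single-WGPEER_A_ENDPOINT path.
  func configureSingleEndpoint(device string, ep *net.UDPAddr) error {
      return nil
  }

  // configureEndpoints tries the new attribute first and falls back
  // to the old single-endpoint behaviour on an old kernel.
  func configureEndpoints(device string, endpoints []*net.UDPAddr) error {
      err := configureMultiEndpoint(device, endpoints)
      if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EOPNOTSUPP) {
          return configureSingleEndpoint(device, endpoints[0])
      }
      return err
  }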

-Toke

