* problem  wireguard + ospf + unconnected tunnels
@ 2017-07-03 21:09 ae
  2017-07-04 15:55 ` Roelf "rewbycraft" Wichertjes
  2017-07-08 14:21 ` Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels) Baptiste Jonglez
  0 siblings, 2 replies; 12+ messages in thread
From: ae @ 2017-07-03 21:09 UTC (permalink / raw)
  To: wireguard


Situation:
2 tunnels:
1 normal, the 2nd with an unconnected far end
+ ospfd from Quagga

At start everything works fine, but after ~30-60 seconds the OSPF stops working.

This is because the OSPF daemon sends packets from the same socket on different interfaces. On the working tunnel interface everything goes fine, but on the 2nd one packets accumulate.
After enough accumulate, the daemon's socket stops sending entirely with "No buffer space available".

Is it possible to fix this with settings?



* Re: problem wireguard + ospf + unconnected tunnels
  2017-07-03 21:09 problem wireguard + ospf + unconnected tunnels ae
@ 2017-07-04 15:55 ` Roelf "rewbycraft" Wichertjes
  2017-07-04 17:10   ` Re[2]: " ae
  2017-07-08 14:21 ` Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels) Baptiste Jonglez
  1 sibling, 1 reply; 12+ messages in thread
From: Roelf "rewbycraft" Wichertjes @ 2017-07-04 15:55 UTC (permalink / raw)
  To: wireguard

 From what you said, I surmise the following setup:
- Three devices, A, B and C.
- A talks ospf to B over wireguard.
- A talks ospf to C over wireguard.
- The connection between A and C has gotten interrupted. (maybe C is a 
laptop)
- The error causes the entire ospf process to fail for all interfaces.
   In other words: A will suddenly also stop talking to B when the 
connection A<->C fails?

If I am correct in that, there are a few things to note:
  - The "No buffer space available" error is normal from wireguard when 
an interface cannot reach the peer.
  - A single "failing" interface shouldn't kill the ospf process for all 
interfaces.
  - This sounds more like a quagga problem, as I have a similar setup (I 
use my laptop for device C in my case) except I use the BIRD routing 
daemon instead of quagga (and this setup works fine for me).

Of course, before any definitive conclusions can be made, we'll need a 
bit more information. Could you possibly provide us with the following 
pieces of information:
  - What distribution are you using?
  - What kernel (version) are you using?
  - What wireguard version are you using?
  - What quagga version are you using?
  - Please provide the kernel logs.
  - Please provide the quagga logs.

On 07/03/2017 11:09 PM, ae wrote:
> Situation:
> 2 tunnels:
> 1 normal, the 2nd with an unconnected far end
> + ospfd from Quagga
> 
> At start everything works fine, but after ~30-60 seconds the OSPF 
> stops working.
> 
> This is because the OSPF daemon sends packets from the same 
> socket on different interfaces. On the working tunnel interface everything 
> goes fine, but on the 2nd one packets accumulate.
> After enough accumulate, the daemon's socket stops 
> sending entirely with "No buffer space available".
> 
> Is it possible to fix this with settings?
> 
> 
> 
> _______________________________________________
> WireGuard mailing list
> WireGuard@lists.zx2c4.com
> https://lists.zx2c4.com/mailman/listinfo/wireguard
> 


* Re[2]: problem wireguard + ospf + unconnected tunnels
  2017-07-04 15:55 ` Roelf "rewbycraft" Wichertjes
@ 2017-07-04 17:10   ` ae
  2017-07-07 15:08     ` Roelf "rewbycraft" Wichertjes
  0 siblings, 1 reply; 12+ messages in thread
From: ae @ 2017-07-04 17:10 UTC (permalink / raw)
  To: Roelf "rewbycraft" Wichertjes; +Cc: wireguard




>Tuesday, July 4, 2017, 20:56 +05:00 from "Roelf "rewbycraft" Wichertjes" <mailings+wireguard@roelf.org>:
>
>From what you said, I surmise the following setup:
>- Three devices, A, B and C.
>- A talks ospf to B over wireguard.
>- A talks ospf to C over wireguard.
>- The connection between A and C has gotten interrupted. (maybe C is a 
>laptop)
>- The error causes the entire ospf process to fail for all interfaces.
>   In other words: A will suddenly also stop talking to B when the 
>connection A<->C fails? 
Not at all.
A-B is a normally established tunnel.
A-C is a tunnel that has never worked; no connection was ever set up.
Both tunnels are configured with the remote peer's endpoint (IP and port) specified explicitly.

The ospfd daemon's work gets blocked because of "No buffer space available".
OSPF uses ONE socket to send its messages on all interfaces, and this socket is blocked by a buffer overflow, which occurs when it sends packets to the tunnel that never comes up.

>
>If I am correct in that, there are a few things to note:
>  - The "No buffer space available" error is normal from wireguard when 
>an interface cannot reach the peer. 
It may be normal, but it blocks ospfd, and as a result it is simply impossible to use them together.
Wouldn't it be better to drop the packets instead?
>
>  - A single "failing" interface shouldn't kill the ospf process for all 
>interfaces. 
It does not kill it, but it does block it.
>
>  - This sounds more like a quagga problem, as I have a similar setup (I 
>use my laptop for device C in my case) except I use the BIRD routing 
>daemon instead of quagga (and this setup works fine for me). 
This is a WireGuard problem.
No other tunnel I have used behaves this way.
When the destination is unreachable, packets are normally just dropped,
but here they keep accumulating...
>
>Of course, before any definitive conclusions can be made, we'll need a 
>bit more information. Could you possibly provide us with the following 
>pieces of information:
>  - What distribution are you using? 
debian9
>
>  - What kernel (version) are you using? 
4.9.30-2+deb9u2
>
>  - What wireguard version are you using? 
wireguard-0.0.20170613-1
>
>  - What quagga version are you using? 
0.99.23.1-1+deb8u3
>
>  - Please provide the kernel logs. 
empty
>
>  - Please provide the quagga logs. 
empty



* Re: problem wireguard + ospf + unconnected tunnels
  2017-07-04 17:10   ` Re[2]: " ae
@ 2017-07-07 15:08     ` Roelf "rewbycraft" Wichertjes
  2017-07-07 15:47       ` Re[2]: " ae
  2017-07-10  0:46       ` Jason A. Donenfeld
  0 siblings, 2 replies; 12+ messages in thread
From: Roelf "rewbycraft" Wichertjes @ 2017-07-07 15:08 UTC (permalink / raw)
  To: wireguard

So, is the problem you actually want help with getting A and C 
to talk to each other?
If so, we'll need to see the configs you're using on both ends of the 
tunnel. I'd also suggest checking your firewalls in this case.

And ospf is simply refusing to use the A<->C link but is still working just 
fine across A<->B?
If so, that's normal.
If A<->B also stops working due to the "No buffer space available" 
error, that is a bug with quagga. (which we can try to (get) fix(ed) in 
that situation)

Sorry if it seems obvious, I'm simply trying to get a grasp as to what 
the actual problem you want help with is.


On 07/04/2017 07:10 PM, ae wrote:
> 
> 
>     Tuesday, July 4, 2017, 20:56 +05:00 from "Roelf "rewbycraft"
>     Wichertjes" <mailings+wireguard@roelf.org>:
> 
>      From what you said, I surmise the following setup:
>     - Three devices, A, B and C.
>     - A talks ospf to B over wireguard.
>     - A talks ospf to C over wireguard.
>     - The connection between A and C has gotten interrupted. (maybe C is a
>     laptop)
>     - The error causes the entire ospf process to fail for all interfaces.
>         In other words: A will suddenly also stop talking to B when the
>     connection A<->C fails?
> 
> Not at all.
> A-B is a normally established tunnel.
> A-C is a tunnel that has never worked; no connection was ever set up.
> Both tunnels are configured with the remote peer's endpoint (IP and 
> port) specified explicitly.
> 
> The ospfd daemon's work gets blocked because of "No 
> buffer space available".
> OSPF uses ONE socket to send its messages on all interfaces, and this 
> socket is blocked by a buffer overflow, which occurs when it sends 
> packets to the tunnel that never comes up.
> 
> 
>     If I am correct in that, there are a few things to note:
>        - The "No buffer space available" error is normal from wireguard
>     when
>     an interface cannot reach the peer.
> 
> It may be normal, but it blocks ospfd, and as a result it is simply 
> impossible to use them together.
> Wouldn't it be better to drop the packets instead?
> 
> 
>        - A single "failing" interface shouldn't kill the ospf process
>     for all
>     interfaces.
> 
> It does not kill it, but it does block it.
> 
> 
>        - This sounds more like a quagga problem, as I have a similar
>     setup (I
>     use my laptop for device C in my case) except I use the BIRD routing
>     daemon instead of quagga (and this setup works fine for me).
> 
> This is a WireGuard problem.
> No other tunnel I have used behaves this way.
> When the destination is unreachable, packets are normally just dropped,
> but here they keep accumulating...
> 
> 
>     Of course, before any definitive conclusions can be made, we'll need a
>     bit more information. Could you possibly provide us with the following
>     pieces of information:
>        - What distribution are you using?
> 
> debian9
> 
> 
>        - What kernel (version) are you using?
> 
> 4.9.30-2+deb9u2
> 
> 
>        - What wireguard version are you using?
> 
> wireguard-0.0.20170613-1
> 
> 
>        - What quagga version are you using?
> 
> 0.99.23.1-1+deb8u3
> 
> 
>        - Please provide the kernel logs.
> 
> empty
> 
> 
>        - Please provide the quagga logs.
> 
> empty
> 


* Re[2]: problem wireguard + ospf + unconnected tunnels
  2017-07-07 15:08     ` Roelf "rewbycraft" Wichertjes
@ 2017-07-07 15:47       ` ae
  2017-07-10  0:46       ` Jason A. Donenfeld
  1 sibling, 0 replies; 12+ messages in thread
From: ae @ 2017-07-07 15:47 UTC (permalink / raw)
  To: Roelf "rewbycraft" Wichertjes; +Cc: wireguard



>So, is the problem you actually want help with getting A and C 
>to talk to each other?
>If so, we'll need to see the configs you're using on both ends of the 
>tunnel. I'd also suggest checking your firewalls in this case.
>
>And ospf is simply refusing to use the A<->C link but is still working just 
>fine across A<->B?
>If so, that's normal.
>If A<->B also stops working due to the "No buffer space available" 
>error, that is a bug with quagga. (which we can try to (get) fix(ed) in 
>that situation)
>
>Sorry if it seems obvious, I'm simply trying to get a grasp as to what 
>the actual problem you want help with is.
>
I gave an accurate description of the problem.
Its essence is that the buffer overflow, which occurs when the socket sends into an unconnected tunnel, blocks all other sends from that same socket to other destinations.

That is, a single non-working tunnel can block ANY socket that sends traffic to several different points.

If you do not consider this a problem, then state in the documentation that WireGuard is partially incompatible with Quagga's ospfd and can lead to hard-to-diagnose problems in the network.

And if you want to simulate the situation:
wg setconf wg0 wg0.conf  (standard config - a peer with an unreachable endpoint and 0.0.0.0/0 as AllowedIPs)
ip route add 10.192.122.3 dev wg0

import socket
import time

# One UDP socket, two destinations: loopback (always deliverable) and an
# address routed into the never-connecting wg0 tunnel.
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
UDP_IP2 = "10.192.122.3"
UDP_PORT2 = 5005
MESSAGE = "Hello, World!"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
n = 0
while True:
    print "send1", n
    sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))    # to loopback: always fine
    print "send2", n
    sock.sendto(MESSAGE, (UDP_IP2, UDP_PORT2))  # into wg0: queued, never delivered
    time.sleep(0.1)
    n += 1


and run it.
The application gets blocked after about 20 seconds.
That is not right.
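
As an application-side workaround sketch (not from this thread, just an illustration reusing the reproducer's addresses): putting the socket in non-blocking mode turns the stall into an error that can simply be dropped, so traffic to the reachable destination keeps flowing. The exact errno (EAGAIN/EWOULDBLOCK or ENOBUFS) depends on where in the send path the failure surfaces.

import errno
import socket
import time

UDP_IP = "127.0.0.1"        # loopback: always deliverable
UDP_IP2 = "10.192.122.3"    # routed into the never-connecting wg0 tunnel
UDP_PORT = 5005
MESSAGE = "Hello, World!"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)     # fail fast instead of stalling the whole loop

n = 0
while True:
    for dst in (UDP_IP, UDP_IP2):
        try:
            sock.sendto(MESSAGE, (dst, UDP_PORT))
        except socket.error as e:
            if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOBUFS):
                print "dropped packet", n, "to", dst   # one bad path no longer blocks the other
            else:
                raise
    time.sleep(0.1)
    n += 1

Using one socket per destination would give the same isolation without any error handling.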



* Indefinite queuing for unconnected peers (Was: problem  wireguard + ospf + unconnected tunnels)
  2017-07-03 21:09 problem wireguard + ospf + unconnected tunnels ae
  2017-07-04 15:55 ` Roelf "rewbycraft" Wichertjes
@ 2017-07-08 14:21 ` Baptiste Jonglez
  2017-07-08 18:51   ` Roelf "rewbycraft" Wichertjes
  2017-07-10  0:53   ` Jason A. Donenfeld
  1 sibling, 2 replies; 12+ messages in thread
From: Baptiste Jonglez @ 2017-07-08 14:21 UTC (permalink / raw)
  To: wireguard


Hi,

The current approach is to queue all outgoing packets for an indefinite
amount of time when the peer is not connected or reachable.

I think it does not make much sense, and leads to the kind of issue you
mention here.  The initial goal was probably to queue packets just long
enough to be able to complete a handshake with the peer, which makes a lot
of sense (it would be annoying to drop the first packet of any outgoing
connection).  But the handshake should not take more than hundreds of
milliseconds.

Maybe Wireguard should drop packets from this queue after a few seconds?
Would it be hard to implement?

Baptiste
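
As a purely abstract sketch of the policy being discussed (not WireGuard code; the 1024-packet cap matches the queue limit mentioned later in the thread, while the five-second timeout is an illustrative guess):

import collections
import time

QUEUE_LIMIT = 1024          # newer packets push out older ones
HANDSHAKE_TIMEOUT = 5.0     # hypothetical "a few seconds"

class PendingPeerQueue(object):
    """Model of a staging queue for a peer without a completed handshake."""
    def __init__(self):
        self.packets = collections.deque(maxlen=QUEUE_LIMIT)
        self.waiting_since = None

    def queue(self, pkt):
        now = time.time()
        if self.waiting_since is None:
            self.waiting_since = now          # first packet starts the handshake clock
        elif now - self.waiting_since > HANDSHAKE_TIMEOUT:
            self.packets.clear()              # handshake never completed: drop, don't hoard
            self.waiting_since = now
        self.packets.append(pkt)

    def handshake_complete(self):
        drained = list(self.packets)
        self.packets.clear()
        self.waiting_since = None
        return drained                        # everything still queued gets sent now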

On Tue, Jul 04, 2017 at 12:09:22AM +0300, ae wrote:
> Situation:
> 2 tunnels:
> 1 normal, the 2nd with an unconnected far end
> + ospfd from Quagga
> 
> At start everything works fine, but after ~30-60 seconds the OSPF stops working.
> 
> This is because the OSPF daemon sends packets from the same socket on different interfaces. On the working tunnel interface everything goes fine, but on the 2nd one packets accumulate.
> After enough accumulate, the daemon's socket stops sending entirely with "No buffer space available".
> 
> Is it possible to fix this with settings?
> 

> _______________________________________________
> WireGuard mailing list
> WireGuard@lists.zx2c4.com
> https://lists.zx2c4.com/mailman/listinfo/wireguard



* Re: Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels)
  2017-07-08 14:21 ` Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels) Baptiste Jonglez
@ 2017-07-08 18:51   ` Roelf "rewbycraft" Wichertjes
  2017-07-10  0:53   ` Jason A. Donenfeld
  1 sibling, 0 replies; 12+ messages in thread
From: Roelf "rewbycraft" Wichertjes @ 2017-07-08 18:51 UTC (permalink / raw)
  To: wireguard

I can personally see use both in getting sendto errors and in simply 
dropping the packets (depending on the software you have communicating 
over wireguard). So rather than change it entirely, I would suggest 
making that an option of some sort.

As an aside, a single interface producing sendto() failures shouldn't, 
in my opinion, cause quagga's ospfd to refuse to operate on other 
interfaces.

On 07/08/2017 04:21 PM, Baptiste Jonglez wrote:
> Hi,
> 
> The current approach is to queue all outgoing packets for an indefinite
> amount of time when the peer is not connected or reachable.
> 
> I think it does not make much sense, and leads to the kind of issue you
> mention here.  The initial goal was probably to queue packets just long
> enough to be able to complete a handshake with the peer, which makes a lot
> of sense (it would be annoying to drop the first packet of any outgoing
> connection).  But the handshake should not take more than hundreds of
> milliseconds.
> 
> Maybe Wireguard should drop packets from this queue after a few seconds?
> Would it be hard to implement?
> 
> Baptiste
> 
> On Tue, Jul 04, 2017 at 12:09:22AM +0300, ae wrote:
>> Situation:
>> 2 tunnels:
>> 1 normal, the 2nd with an unconnected far end
>> + ospfd from Quagga
>>
>> At start everything works fine, but after ~30-60 seconds the OSPF stops working.
>>
>> This is because the OSPF daemon sends packets from the same socket on different interfaces. On the working tunnel interface everything goes fine, but on the 2nd one packets accumulate.
>> After enough accumulate, the daemon's socket stops sending entirely with "No buffer space available".
>>
>> Is it possible to fix this with settings?
>>
> 
>> _______________________________________________
>> WireGuard mailing list
>> WireGuard@lists.zx2c4.com
>> https://lists.zx2c4.com/mailman/listinfo/wireguard
> 
> 
> 
> _______________________________________________
> WireGuard mailing list
> WireGuard@lists.zx2c4.com
> https://lists.zx2c4.com/mailman/listinfo/wireguard
> 


* Re: problem wireguard + ospf + unconnected tunnels
  2017-07-07 15:08     ` Roelf "rewbycraft" Wichertjes
  2017-07-07 15:47       ` Re[2]: " ae
@ 2017-07-10  0:46       ` Jason A. Donenfeld
  2017-07-10 17:06         ` Re[2]: " ae
  1 sibling, 1 reply; 12+ messages in thread
From: Jason A. Donenfeld @ 2017-07-10  0:46 UTC (permalink / raw)
  To: aeforeve; +Cc: WireGuard mailing list, Roelf Wichertjes

Hey ae,

Thanks for your detailed reports, especially the nice Python
reproducer you sent. And sorry for the delay in getting back to you
and investigating this. I actually don't receive any of your emails. I
don't know if it's because mail.ru has a bad spam score, or because
the HTML part of your email contains embedded javascript, but there's
something sufficiently sketchy that precludes them from being
delivered to my mailbox. Luckily others on the list brought this
thread to my attention.

I successfully debugged and fixed the Python reproducer you sent me.
Could you try the following patch, and see if applying it results in
ospfd working properly?

https://git.zx2c4.com/WireGuard/patch/?id=177335d5b460cce07631dff8bea478b73e184247

After you apply that and rebuild the module, be sure to rmmod the old
module and modprobe the new one. Then repeat your tests and see if it
works.

For interested readers on the list, here's what's happening:

* A packet inside the kernel is represented as an sk_buff, or an skb.
* Each socket inside the kernel has a budget of how many skbs it can
allocate for itself.
* When a socket reaches the limit of skbs it can allocate for itself,
it blocks until those skbs are freed.

Meanwhile in WireGuard:

* When a handshake has not been established, packets are queued up to
be sent immediately after a handshake is established.
* There is a maximum of 1024 packets allowed in this queue. Newer
packets push out older packets.
* After 20 unsuccessful attempts to establish a handshake, this queue
is emptied.

In your Python example, you used the same socket to send packets to
both lo and to wg0. lo immediately dropped the packets it couldn't
deliver, whereas wg0 did not, due to the above. After reaching a
per-socket limit on skbs allocated, sendto() simply blocks, thus
preventing packets being sent anywhere using that same socket. Herein
lies the problem.

The solution is to "orphan" packets that WireGuard buffers longterm,
so that they're no longer charged to the socket's maximum limit. Since
the queue is capped (1024 packets), new packets replace old packets,
and everything is freed after 20 unsuccessful handshake attempts, this
does not cause any sort of unbounded memory growth.
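
As a rough user-space illustration of that per-socket budget (an assumption-laden sketch, not part of the original messages, reusing the reproducer's unreachable peer behind wg0): a non-blocking UDP socket can only hand the kernel a limited number of datagrams destined for the dead tunnel before its send-buffer allowance runs out; with the long-term-buffered skbs orphaned by the patch, the loop should instead run all the way to its cap.

import errno
import socket

PEER = ("10.192.122.3", 5005)     # routed into the never-connecting wg0 tunnel
PAYLOAD = "x" * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)           # report the exhausted budget instead of blocking
sndbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

sent = 0
try:
    while sent < 100000:          # arbitrary upper bound for the experiment
        sock.sendto(PAYLOAD, PEER)
        sent += 1
except socket.error as e:
    if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOBUFS):
        raise

print "SO_SNDBUF:", sndbuf, "bytes"
print "datagrams accepted before the socket's budget ran out:", sent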

So, the aforementioned patch successfully fixes your Python
reproducer code. Please try it on your routing daemon and let me know
if it fixes the problem there too?

Thanks again for your help,
Jason


* Re: Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels)
  2017-07-08 14:21 ` Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels) Baptiste Jonglez
  2017-07-08 18:51   ` Roelf "rewbycraft" Wichertjes
@ 2017-07-10  0:53   ` Jason A. Donenfeld
  1 sibling, 0 replies; 12+ messages in thread
From: Jason A. Donenfeld @ 2017-07-10  0:53 UTC (permalink / raw)
  To: Baptiste Jonglez; +Cc: WireGuard mailing list

Hey Baptiste,

As alluded to in my other recent reply, WireGuard already does this
actually. It tries the handshake a few times, and only after failing
does it drop the queue. I suppose I could greatly reduce the clearing
condition from dropping after 20 handshakes to dropping after 1
handshake, but I don't think it makes a difference anyway, because new
packets should replace old packets in the queue.

Jason


* Re[2]: problem wireguard + ospf + unconnected tunnels
  2017-07-10  0:46       ` Jason A. Donenfeld
@ 2017-07-10 17:06         ` ae
  2017-07-10 17:09           ` Jason A. Donenfeld
  0 siblings, 1 reply; 12+ messages in thread
From: ae @ 2017-07-10 17:06 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list, Roelf Wichertjes



>Hey ae,
>
>Thanks for your detailed reports, especially the nice Python
>reproducer you sent. And sorry for the delay in getting back to you
>and investigating this. I actually don't receive any of your emails. I
>don't know if it's because mail.ru has a bad spam score, or because
>the HTML part of your email contains embedded javascript, but there's
>something sufficiently sketchy that precludes them from being
>delivered to my mailbox. Luckily others on the list brought this
>thread to my attention.
>
>I successfully debugged and fixed the Python reproducer you sent me.
>Could you try the following patch, and see if applying it results in
>ospfd working properly?
>
>https://git.zx2c4.com/WireGuard/patch/?id=177335d5b460cce07631dff8bea478b73e184247

Yes, it works.

Plus a couple of features I found missing when switching to WireGuard:

1) the tunnel source address
2) a preshared-key-only crypto mode



* Re: Re[2]: problem wireguard + ospf + unconnected tunnels
  2017-07-10 17:06         ` Re[2]: " ae
@ 2017-07-10 17:09           ` Jason A. Donenfeld
  2017-07-10 17:26             ` Re[4]: " ae
  0 siblings, 1 reply; 12+ messages in thread
From: Jason A. Donenfeld @ 2017-07-10 17:09 UTC (permalink / raw)
  To: ae; +Cc: WireGuard mailing list, Roelf Wichertjes

On Mon, Jul 10, 2017 at 7:06 PM, ae <aeforeve@mail.ru> wrote:
> Yes, it works.
Great to hear! This will be a part of the next snapshot.

> Plus a couple of features I found missing when switching to WireGuard:
>
> 1) the tunnel source address
What is this? Can you elaborate on what you mean?

> 2) a preshared-key-only crypto mode
WireGuard has a preshared-key mode, but it's in addition to the normal
EC-based crypto, not instead of. Welcome to the future!


* Re[4]: problem wireguard + ospf + unconnected tunnels
  2017-07-10 17:09           ` Jason A. Donenfeld
@ 2017-07-10 17:26             ` ae
  0 siblings, 0 replies; 12+ messages in thread
From: ae @ 2017-07-10 17:26 UTC (permalink / raw)
  To: Jason A. Donenfeld; +Cc: WireGuard mailing list, Roelf Wichertjes





>Monday, July 10, 2017, 22:09 +05:00 from "Jason A. Donenfeld" <Jason@zx2c4.com>:
>
>On Mon, Jul 10, 2017 at 7:06 PM, ae <aeforeve@mail.ru> wrote:
>> Yes, it works.
>Great to hear! This will be a part of the next snapshot.
>
>> Plus a couple of features I found missing when switching to WireGuard:
>>
>> 1) the tunnel source address
>What is this? Can you elaborate on what you mean?

The tunnel source address:
not only the source port, but also the address from which the tunnel packets are sent.
On a multihomed server it is possible, but inconvenient, to control which address the packets are sent from via the fwmark and policy routing.


>
>> 2) a preshared-key-only crypto mode
>WireGuard has a preshared-key mode, but it's in addition to the normal
>EC-based crypto, not instead of. Welcome to the future!

Routing by crypto key may be fine, but with dynamic routing it does not work at all.
You end up having to create a crowd of point-to-point tunnels, each with its own pair of keys.


And the question is: how well will it perform in a point-to-multipoint setup when the multipoint side is ~10000 peers? That is 10,000 + 1 keys.



Thread overview: 12+ messages
2017-07-03 21:09 problem wireguard + ospf + unconnected tunnels ae
2017-07-04 15:55 ` Roelf "rewbycraft" Wichertjes
2017-07-04 17:10   ` Re[2]: " ae
2017-07-07 15:08     ` Roelf "rewbycraft" Wichertjes
2017-07-07 15:47       ` Re[2]: " ae
2017-07-10  0:46       ` Jason A. Donenfeld
2017-07-10 17:06         ` Re[2]: " ae
2017-07-10 17:09           ` Jason A. Donenfeld
2017-07-10 17:26             ` Re[4]: " ae
2017-07-08 14:21 ` Indefinite queuing for unconnected peers (Was: problem wireguard + ospf + unconnected tunnels) Baptiste Jonglez
2017-07-08 18:51   ` Roelf "rewbycraft" Wichertjes
2017-07-10  0:53   ` Jason A. Donenfeld
