* Flowtable with ppp/bridge
@ 2021-04-26 15:30 Frank Wunderlich
  2021-04-26 17:29 ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-04-26 15:30 UTC (permalink / raw)
  To: netfilter

Hi,
I am trying to set up a flowtable for use with ppp(oe) and a bridge interface.

http://forum.banana-pi.org/t/new-netfilter-flow-table-based-hnat/12049/60?u=frank-w

table ip filter {
    flowtable ft {
        hook ingress priority 10;
        devices = { "ppp8", "lanbr0" };
        flags offload
    }
}

I cannot get nft (0.9.8) to apply this.

Any idea? Is there a kernel option or something missing in nft?
regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-26 15:30 Flowtable with ppp/bridge Frank Wunderlich
@ 2021-04-26 17:29 ` Pablo Neira Ayuso
  2021-04-26 17:51   ` Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-04-26 17:29 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Mon, Apr 26, 2021 at 05:30:47PM +0200, Frank Wunderlich wrote:
> Hi,
> I try to setup a flowtable for use with ppp(oe) and bridge interface.
> 
> http://forum.banana-pi.org/t/new-netfilter-flow-table-based-hnat/12049/60?u=frank-w
> 
> table ip filter {
> flowtable ft {
> hook ingress priority 10;
> devices = {"ppp8","lanbr0"};
> flags offload
> }
> 
> I cannot get nft (0.9.8) apply this
> 
> Any idea? Is there a kernel option or something in nft missing?

Someone already replied to you:

http://forum.banana-pi.org/t/new-netfilter-flow-table-based-hnat/12049/63

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-26 17:29 ` Pablo Neira Ayuso
@ 2021-04-26 17:51   ` Frank Wunderlich
  2021-04-26 17:57     ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-04-26 17:51 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

Hi Pablo,

Is Alex's guess right that I need to use the physical interface instead of the virtual one?

In my case I have a PPP connection over a VLAN on the WAN port.

ppp8 => wan.110 => wan

The LAN side (bridge) may work, but for PPP it sounds wrong to me.
regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-26 17:51   ` Frank Wunderlich
@ 2021-04-26 17:57     ` Pablo Neira Ayuso
  2021-04-26 18:08       ` Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-04-26 17:57 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Mon, Apr 26, 2021 at 07:51:11PM +0200, Frank Wunderlich wrote:
> Hi Pablo,
> 
> Is alex' guess right and i need to use physical interface instead of
> the virtual one?

Confusing, you reported an example that works:

http://forum.banana-pi.org/t/new-netfilter-flow-table-based-hnat/12049/30

That was in March 2021.

> In my case i have ppp connection over vlan on wan port.
> 
> ppp8 => wan.110 => wan
> 
> Lan side (bridge) may work,but for ppp it sounds wrong to me.

Just add the 'wan' device to the flowtable, as you did back in March.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-26 17:57     ` Pablo Neira Ayuso
@ 2021-04-26 18:08       ` Frank Wunderlich
  2021-04-27 23:49         ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-04-26 18:08 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

On 26 April 2021 19:57:03 CEST, Pablo Neira Ayuso <pablo@netfilter.org> wrote:
>On Mon, Apr 26, 2021 at 07:51:11PM +0200, Frank Wunderlich wrote:
>> Hi Pablo,
>> 
>> Is alex' guess right and i need to use physical interface instead of
>> the virtual one?
>
>Confusing, you reported an example that works:
>
>http://forum.banana-pi.org/t/new-netfilter-flow-table-based-hnat/12049/30
>
>That was in March 2021.

That was a test without ppp/vlan/bridge on my test device (to run performance tests and look at the bindings in a simple setup). Now I am working on my main router, where I use PPPoE to my ISP, which needs to be encapsulated in a VLAN (to separate it from VoIP).

>> In my case i have ppp connection over vlan on wan port.
>> 
>> ppp8 => wan.110 => wan
>> 
>> Lan side (bridge) may work,but for ppp it sounds wrong to me.
>
>Just add the 'wan' device to the flowtable, as you did back in March.

OK, I will try it... If this also works for traffic routed to the ppp interface, then it is fine. Forwarding is done from the lanbr0 to the ppp8 virtual interface, not directly between physical interfaces as in my test in March.

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-26 18:08       ` Frank Wunderlich
@ 2021-04-27 23:49         ` Pablo Neira Ayuso
  2021-04-28  8:07           ` Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-04-27 23:49 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Mon, Apr 26, 2021 at 08:08:05PM +0200, Frank Wunderlich wrote:
> Am 26. April 2021 19:57:03 MESZ schrieb Pablo Neira Ayuso <pablo@netfilter.org>:
> >On Mon, Apr 26, 2021 at 07:51:11PM +0200, Frank Wunderlich wrote:
> >> Hi Pablo,
> >> 
> >> Is alex' guess right and i need to use physical interface instead of
> >> the virtual one?
> >
> >Confusing, you reported an example that works:
> >
> >http://forum.banana-pi.org/t/new-netfilter-flow-table-based-hnat/12049/30
> >
> >That was in March 2021.
> 
> That was a test without ppp/vlan/bridge on my test device (to make
> performance test and looking for bindings on simple setup). Now i'm
> working on my main router where i use pppoe to my isp which needs to
> be encapsulated into a vlan (to separate from voip).
> 
> >> In my case i have ppp connection over vlan on wan port.
> >> 
> >> ppp8 => wan.110 => wan
> >> 
> >> Lan side (bridge) may work,but for ppp it sounds wrong to me.
> >
> >Just add the 'wan' device to the flowtable, as you did back in March.
> 
> Ok, i try it...If this works also for traffic routed to the ppp
> interface then it is ok. Forwarding is done from lanbr0 to ppp8
> virtual interfaces not to physical interfaces directly like i've
> done on test in March

Since Linux kernel 5.13-rc, the flowtable is capable of autodetecting
your existing network device configuration. Therefore, you only have
to add the physical devices to the flowtable definition.

The flowtable offload supports:

- VLAN devices.
- Bridge VLAN filtering.
- PPPoE devices.
- Bridge devices.

and combinations of these devices.

PPPoE over VLAN is also supported; I tested this combination specifically before submitting the series upstream.
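
For illustration, a minimal flowtable definition following this approach could look like the sketch below (untested; the device names are just an example and have to match your real physical interfaces):

table ip filter {
    flowtable ft {
        hook ingress priority 10
        devices = { wan, lan0, lan1, lan2, lan3 }
        # hardware offload on the mt7623; drop this line for a software-only fastpath
        flags offload
    }

    chain forward {
        type filter hook forward priority 0; policy accept;
        # established TCP/UDP flows take the fastpath; ppp8, the VLAN and
        # lanbr0 stacked on top of the listed devices are detected automatically
        ip protocol { tcp, udp } flow add @ft
    }
}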

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-27 23:49         ` Pablo Neira Ayuso
@ 2021-04-28  8:07           ` Frank Wunderlich
  2021-04-28 17:26             ` Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-04-28  8:07 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

Hi

Does it depend on any other recent changes?

I have backported v1 of the offload series to 5.10 ([1]; maybe I made a mistake there) so that I can run an LTS kernel while testing on the main router.

Direct LAN-WAN HNAT works, but forwarding to PPPoE is broken (I can apply the ruleset with the wan/lan physical devices, but no traffic goes through ppp). On the client I get a connection reset; the PPPoE session on the router stays alive.

This is one of the MTK hardware offload table entries (mt7623):

01ac6 UNB IPv4 5T orig=192.168.0.21:52136->195.20.250.26:443 new=217.61.147.xx:57010->217.72.196.71:443 eth=02:11:02:03:04:05->00:00:5e:00:01:02 etype=0101 vlan=140,0 ib1=1000019d ib2=007ff020

VLAN 140 is correct (I accidentally wrote 110 above). The source IP is correctly changed from private to public. I wonder about the etype and the target IP: I think the target IP should stay the same, and the etype IMHO should be 88xx (VLAN / PPPoE-S).

[1] https://github.com/frank-w/BPI-R2-4.14/commits/5.10-hnat
regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Flowtable with ppp/bridge
  2021-04-28  8:07           ` Frank Wunderlich
@ 2021-04-28 17:26             ` Frank Wunderlich
  2021-04-29 13:59               ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-04-28 17:26 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

I tested again and found a BND entry with correct source and destination IPs. But I wonder about etype=0101; I expect 8100 here, as the topmost ethertype should be 802.1Q/VLAN.

PPPoE => VLAN => DSA

00c22 BND IPv4 5T orig=192.168.0.21:33676->213.95.41.4:443 new=80.245.76.249:33676->213.95.41.4:443 eth=02:11:02:03:04:05->00:00:5e:00:01:02 etype=0101 vlan=140,0 ib1=214949a7 ib2=007ff020

I looked at mtk_foe_entry_set_vlan and it seems the ethertype is not set there; I have not found the place where 0x101 gets set.

https://patchwork.ozlabs.org/project/netfilter-devel/patch/20210311003604.22199-22-pablo@netfilter.org/

As it is neither ETH_P_PPP_SES (0x8864) nor ETH_P_IP (0x0800), I guess it is calculated in mtk_foe_entry_set_dsa.

Am I on the right track?
regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Flowtable with ppp/bridge
  2021-04-28 17:26             ` Frank Wunderlich
@ 2021-04-29 13:59               ` Frank Wunderlich
  2021-05-02 13:51                 ` Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-04-29 13:59 UTC (permalink / raw)
  To: frank-w; +Cc: Pablo Neira Ayuso, netfilter

Hi,

After setting up a PPPoE server on my laptop and configuring everything to go over it, it works so far when running 5.12-rc plus the offload series (v1). Running 5.10 with the ported version, I cannot get TCP working with the flowtable over vlan/ppp (even without offloading).

It seems there are some flowtable patches that I am missing in my 5.10 tree.

Any idea which patch(es) these could be?

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Flowtable with ppp/bridge
  2021-04-29 13:59               ` Aw: " Frank Wunderlich
@ 2021-05-02 13:51                 ` Frank Wunderlich
  2021-05-02 22:11                   ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-02 13:51 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: Pablo Neira Ayuso, netfilter

Hi,

I got a bit further, and it looks like an MTU issue.

I have now tested bridges, VLAN and PPPoE with my 5.10 kernel.

The first two work without problems; PPPoE only works if I reduce the MTU to e.g. 1480 (PPPoE has 1492).

I tried this patch (found as a difference between my 5.10 and 5.12 hnat trees), but it does not solve it.

https://github.com/frank-w/BPI-R2-4.14/commit/5f7d57280c1982d993d5f4ff0edac310f820f607 (bpf: Drop MTU check when doing TC-BPF redirect to ingress)

I wonder why the MTU is a problem here: with the default of 1500 I still get an internet connection (over PPPoE on the main router too), and due to path MTU discovery it should fragment. It seems this is not working for the flowtable in 5.10. Without the flowtable (removing the "flow add" line; disabling "flags offload" is not enough) I have no issues.

Any idea?

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Flowtable with ppp/bridge
  2021-05-02 13:51                 ` Frank Wunderlich
@ 2021-05-02 22:11                   ` Pablo Neira Ayuso
  2021-05-03 18:56                     ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-05-02 22:11 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Sun, May 02, 2021 at 03:51:08PM +0200, Frank Wunderlich wrote:
> Hi,
> 
> i got a bit further and it looks like an MTU-Issue
> 
> i tested now with my 5.10 bridges,vlan and pppoe
> 
> first 2 working without problems, pppoe works only if i reduce mtu to e.g. 1480 (pppoe has 1492).
>
> i tried with this patch (found as difference between my 5.10 and 5.12 hnat trees), but this does not solve it.
> 
> https://github.com/frank-w/BPI-R2-4.14/commit/5f7d57280c1982d993d5f4ff0edac310f820f607 (bpf: Drop MTU check when doing TC-BPF redirect to ingress)
> 
> any idea? i wonder why mtu is a problem here, as with 1500 default i
> still got internet-connection (over pppoe on main-router too) and
> due to Pathdiscovery it should fragment. Seems this is not working
> in 5.10 for flowtable. Without flowtable (disable "flow add" line,
> disabling "flags offload" is not enough) i have no issues.
> 
> any idea?

You have to add a rule to clamp TCP mss to path MTU.

... tcp flags syn tcp option maxseg size set rt mtu
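
For example (untested sketch; the flowtable name @f and the drop policy are assumed from your description), the rule can simply sit at the top of your forward chain:

    chain FORWARD {
        type filter hook forward priority 0; policy drop;
        # clamp the MSS of the original syn and of the reply syn+ack to the path MTU
        tcp flags syn tcp option maxseg size set rt mtu
        ip protocol { tcp, udp } flow add @f
        ct state established,related accept
        # your rules for accepting/rejecting new connections follow here
    }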

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Re: Flowtable with ppp/bridge
  2021-05-02 22:11                   ` Pablo Neira Ayuso
@ 2021-05-03 18:56                     ` Frank Wunderlich
  2021-05-03 21:32                       ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-03 18:56 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

Hi Pablo

> Sent: Monday, 3 May 2021 at 00:11
> From: "Pablo Neira Ayuso" <pablo@netfilter.org>

> You have to add a rule to clamp TCP mss to path MTU.
>
> ... tcp flags syn tcp option maxseg size set rt mtu

Thanks, I will try this as described here (just for reference):

https://wiki.nftables.org/wiki-nftables/index.php/Mangling_packet_headers

The MTU I announce via dnsmasq does not work for all client devices.

But IMHO this should also affect 5.12, and 5.10 without the flowtable (because the limit is the ppp tunnel on the default gateway), right? So it looks like the flowtable in 5.10 breaks path MTU discovery or prevents the fragmentation that should normally happen if packets are too big.

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Re: Flowtable with ppp/bridge
  2021-05-03 18:56                     ` Aw: " Frank Wunderlich
@ 2021-05-03 21:32                       ` Pablo Neira Ayuso
  2021-05-04 10:54                         ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-05-03 21:32 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Mon, May 03, 2021 at 08:56:48PM +0200, Frank Wunderlich wrote:
> Hi Pablo
> 
> > Gesendet: Montag, 03. Mai 2021 um 00:11 Uhr
> > Von: "Pablo Neira Ayuso" <pablo@netfilter.org>
> 
> > You have to add a rule to clamp TCP mss to path MTU.
> >
> > ... tcp flags syn tcp option maxseg size set rt mtu
> 
> Thanks i try this like described here (just for reference):
> 
> https://wiki.nftables.org/wiki-nftables/index.php/Mangling_packet_headers

I have updated the wiki: you have to mangle the TCP MSS options of the
original syn and the reply syn+ack packets.

> my MTU broadcast via dnsmasq does not work for all client-devices
> 
> but imho this should affect 5.12 and 5.10 without flowtable too
> (because limit is the ppp-tunnel in default Gateway), right?? so it
> looks like flowtable in 5.10 breaks the Path Discovery or prevents
> fragmentation which should normally happen if packets are too big.

Did you try with the rule that mangles both the original syn and the
reply syn+ack packets? Do not restrict mangling to oifname pppoe0.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-03 21:32                       ` Pablo Neira Ayuso
@ 2021-05-04 10:54                         ` Frank Wunderlich
  2021-05-04 11:42                           ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-04 10:54 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

> Sent: Monday, 3 May 2021 at 23:32
> From: "Pablo Neira Ayuso" <pablo@netfilter.org>

> On Mon, May 03, 2021 at 08:56:48PM +0200, Frank Wunderlich wrote:
> I have updated the wiki: you have to mangle the TCP MSS options of the
> original syn and the reply syn+ack packets.

Does the old rule not match both directions (syn and syn+ack)? It looks like
you have only removed the "oifname ppp0" part.

> > but imho this should affect 5.12 and 5.10 without flowtable too
> > (because limit is the ppp-tunnel in default Gateway), right?? so it
> > looks like flowtable in 5.10 breaks the Path Discovery or prevents
> > fragmentation which should normally happen if packets are too big.
>
> Did you try with the rule that mangles both the original syn and the
> reply syn+ack packets? Do not restrict mangling to oifname pppoe0.

My last tests were completely without any MSS fix, so I should have had the problem
in all kernel versions, with and without the flowtable. But I only had the problem in 5.10
with the flowtable, so I guess the flowtable in 5.10 blocks the normal behaviour or adds
additional headers internally, which prevents normal forwarding.

My current rule is this (only on the main router, without any extensive tests...):

    chain FORWARD {
        type filter hook forward priority 0; policy drop;
        #https://wiki.nftables.org/wiki-nftables/index.php/Mangling_packet_headers
        #MSS fix for pppoe 1500 - 8 (pppoe) - 20 (ipv4) - 20 (TCP)
        oifname $ifwan tcp flags syn tcp option maxseg size set 1452

$ifwan is my ppp8, and "tcp flags syn" IMHO should match both syn and syn+ack.

Dropping the "oifname pppX" makes no sense for me as the ppp-interface is the bottleneck. why should i watch for tcp-packets on lan-only where every interface has the mtu 1500? imho i need to limit only the packets going through the bottleneck on my side. The other side should be handled by ISP (but with syn and syn-ack i watch both directions and modify if needed).

By the way, I read that the flow offload creates a bypass, with the result that further rules are not processed, right?

I have some restrictions in the forward chain that may then be skipped.

example:

   chain FORWARD {
        type filter hook forward priority 0; policy drop;
        oifname $ifwan tcp flags syn tcp option maxseg size set 1452
        ip protocol { tcp, udp } flow add @f
        oifname $ifexternal ip saddr $iprangesblocked jump REJECTED comment "block internal ip ranges to have only internal access"
        udp dport {41,43,44,58,59,60} jump REJECTED comment "block ipv6 in ipv4 tunnel"

Is it true that the bottom two rules are not processed due to the flowtable? I guess (!) the flowtable only affects established connections, whereas the bottom two rules prevent exactly that ("connections" never enter the established state because they are dropped on the first packet).

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-04 10:54                         ` Aw: " Frank Wunderlich
@ 2021-05-04 11:42                           ` Pablo Neira Ayuso
  2021-05-05  8:55                             ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-05-04 11:42 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Tue, May 04, 2021 at 12:54:24PM +0200, Frank Wunderlich wrote:
> > Gesendet: Montag, 03. Mai 2021 um 23:32 Uhr
> > Von: "Pablo Neira Ayuso" <pablo@netfilter.org>
> 
> > On Mon, May 03, 2021 at 08:56:48PM +0200, Frank Wunderlich wrote:
> > I have updated the wiki: you have to mangle the TCP MSS options of the
> > original syn and the reply syn+ack packets.
> 
> does old rule not match both directions (syn and syn+ack)? it looks like
> you have only removed the "oifname ppp0"
>
> > > but imho this should affect 5.12 and 5.10 without flowtable too
> > > (because limit is the ppp-tunnel in default Gateway), right?? so it
> > > looks like flowtable in 5.10 breaks the Path Discovery or prevents
> > > fragmentation which should normally happen if packets are too big.
> >
> > Did you try with the rule that mangles both the original syn and the
> > reply syn+ack packets? Do not restrict mangling to oifname pppoe0.
> 
> my last tests were completely without any mss fix, so i should have
> always the problem with and without flowtable in all
> Kernel-versions. but i had the problem only in 5.10 with flowtable,
> so i guess flowtable in 5.10 blocks the normal behaviour or adding
> additional headers internally which prevents a normal forward.

You also need to clamp the TCP MSS to the path MTU in a non-flowtable setup.

> my current rule is this (only on main-router without any extensive tests...):
> 
>     chain FORWARD {
>         type filter hook forward priority 0; policy drop;
>         #https://wiki.nftables.org/wiki-nftables/index.php/Mangling_packet_headers
>         #MSS fix for pppoe 1500 - 8 (pppoe) - 20 (ipv4) - 20 (TCP)
>         oifname $ifwan tcp flags syn tcp option maxseg size set 1452
> 
> $ifwan is my ppp8, and "tcp flags syn" imho should match syn and syn+ack.

syn+ack matches iifname $ifwan.
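
If you prefer to keep the interface match instead of dropping it, you would need both directions explicitly, something like (sketch, with $ifwan being your ppp8):

    # original syn, leaving through the PPPoE uplink
    oifname $ifwan tcp flags syn tcp option maxseg size set rt mtu
    # reply syn+ack, coming back in through the PPPoE uplink
    iifname $ifwan tcp flags syn tcp option maxseg size set rt mtu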

> Dropping the "oifname pppX" makes no sense for me as the
> ppp-interface is the bottleneck. why should i watch for tcp-packets
> on lan-only where every interface has the mtu 1500? imho i need to
> limit only the packets going through the bottleneck on my side. The
> other side should be handled by ISP (but with syn and syn-ack i
> watch both directions and modify if needed).
> 
> btw. i read that the flow offload makes a bypass resulting that
> further rules are not processed, right?
>
> i have some limits in formward-chain, that may be skipped.
> 
> example:
> 
>    chain FORWARD {
>         type filter hook forward priority 0; policy drop;
>         oifname $ifwan tcp flags syn tcp option maxseg size set 1452
>         ip protocol { tcp, udp } flow add @f
>         oifname $ifexternal ip saddr $iprangesblocked jump REJECTED comment "block internal ip ranges to have only internal access"
>         udp dport {41,43,44,58,59,60} jump REJECTED comment "block ipv6 in ipv4 tunnel"
> 
> is it true that bottom 2 rules are not processed due to the
> flowtable? i guess (!) flowtable only affects established
> connections where bottom 2 rules prevent this ("connections" do not
> enter established state because dropped on first packet).

Only the two initial syn and syn+ack packets follow the classic
forwarding path. Therefore, the FORWARD chain in your example above is
evaluated only for the two initial packets of the TCP connection.

You should add the 'flow add' rule at the bottom of your ruleset in
your example above.
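
Reordered, your example would look roughly like this (untested sketch; the variables, the REJECTED chain and the flowtable @f as in your ruleset):

    chain FORWARD {
        type filter hook forward priority 0; policy drop;
        tcp flags syn tcp option maxseg size set rt mtu
        oifname $ifexternal ip saddr $iprangesblocked jump REJECTED comment "block internal ip ranges to have only internal access"
        udp dport {41,43,44,58,59,60} jump REJECTED comment "block ipv6 in ipv4 tunnel"
        # only flows that were not rejected above are offloaded
        ip protocol { tcp, udp } flow add @f
        # your accept rules for allowed traffic follow here
    }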

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-04 11:42                           ` Pablo Neira Ayuso
@ 2021-05-05  8:55                             ` Frank Wunderlich
  2021-05-05 22:55                               ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-05  8:55 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

Hi,
> Sent: Tuesday, 4 May 2021 at 13:42
> From: "Pablo Neira Ayuso" <pablo@netfilter.org>

> You also need TCP clamp MTU in a non-flowtable setup.
That's clear now, but I guess I have not had many problems until now because of path MTU discovery.

> > $ifwan is my ppp8, and "tcp flags syn" imho should match syn and syn+ack.
>
> syn+ack matches iifname $ifwan.

IMHO this depends on the initiator of the connection. If I try to establish a connection from my LAN, this is true, and the MSS is set by the oifname ppp0 rule. AFAIR the SYN+ACK from the "server" uses my advertised size (if it is smaller than its own), or am I wrong?

If I initiate a connection from the public internet, I need to match iifname ppp0, but only if my ISP does not modify the MSS when pushing the initial SYN packet through the ppp tunnel. But AFAIR in this case the oifname rule modifies the MSS on the response (SYN+ACK) that I send back to the initiator.

> Only the two initial syn and syn+ack packets follow the classic
> forwarding path. Therefore, the FORWARD chain in your example above is
> evaluated only for the two initial packets of the TCP connection.
>
> You should add the 'flow add' rule at the bottom of your ruleset in
> your example above.

Good to know. So the flowtable always "accepts" matching packets (here all UDP/TCP), right?
And the final forward policy is never hit if the flowtable condition matches.

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-05  8:55                             ` Aw: " Frank Wunderlich
@ 2021-05-05 22:55                               ` Pablo Neira Ayuso
  2021-05-06  9:53                                 ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-05-05 22:55 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Wed, May 05, 2021 at 10:55:24AM +0200, Frank Wunderlich wrote:
> Hi,
> > Gesendet: Dienstag, 04. Mai 2021 um 13:42 Uhr
> > Von: "Pablo Neira Ayuso" <pablo@netfilter.org>
>
> > You also need TCP clamp MTU in a non-flowtable setup.
> hi,
> thats clear now, but i guess i had not much problems till now because of Path discovery
>
> > > $ifwan is my ppp8, and "tcp flags syn" imho should match syn and syn+ack.
> >
> > syn+ack matches iifname $ifwan.
>
> imho this depends on the initiator of the connection. if i try to
> establish connection from my lan, this is true, and mss is set by
> oifname ppp0. Afair SYN-ACK from "Server" does use my size for
> syn-ack (if it is smaller that its size), or am i wrong?

RFC 6691 says that the TCP MSS is:

   The maximum number of data octets that may be received by the
   sender of this TCP option in TCP segments with no TCP header
   options transmitted in IP datagrams with no IP header option

> if i initiate a connection from the public internet, i need to match
> iifname ppp0, but only if my ISP does not modify mss on pushing the
> initial SYN-Packet through the ppp-tunnel. But afair in this case i
> modify mss with oifname, but on response (SYN-ACK) i send back to
> the initiator.
>
> > Only the two initial syn and syn+ack packets follow the classic
> > forwarding path. Therefore, the FORWARD chain in your example above is
> > evaluated only for the two initial packets of the TCP connection.
> >
> > You should add the 'flow add' rule at the bottom of your ruleset in
> > your example above.
>
> good to know, so flowtable does always "accept" matching packets
> (here all udp/tcp), right?

Yes, the flowtable lookup happens at the ingress hook, which comes much
earlier than the forward chain.

> and final foward-policy does neyer hit if flowtable condition
> matches.

The forward policy is skipped once the flowtable lookup finds a
matching entry.

By "flowtable condition" I'm not sure if you're refering to the "flow
add" statement through.

So, based on your original simple example, this is the (untested) ruleset
that I'd suggest:

table x {
   flowtable x {
        hook ingress priority 0
        devices = { eth0, eth1, eth2 }
   }

   chain FORWARD_established {
        ip protocol { tcp, udp } flow add @x
        accept
   }

   chain REJECTED {
        # your rules to reject traffic here
   }

   chain FORWARD_new {
        oifname $ifexternal ip saddr $iprangesblocked jump REJECTED comment "block internal ip ranges to have only internal access"
        udp dport {41,43,44,58,59,60} jump REJECTED comment "block ipv6 in ipv4 tunnel"
   }

   chain FORWARD {
        type filter hook forward priority 0; policy drop;

        tcp flags syn tcp option maxseg size set rt mtu
        ct state vmap { established : jump FORWARD_established, related : jump FORWARD_established, new : jump FORWARD_new }
   }
}

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-05 22:55                               ` Pablo Neira Ayuso
@ 2021-05-06  9:53                                 ` Frank Wunderlich
  2021-05-06 15:51                                   ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-06  9:53 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

Hi
> Sent: Thursday, 6 May 2021 at 00:55
> From: "Pablo Neira Ayuso" <pablo@netfilter.org>

> rfc6691 says that TCP MSS is:
>
>    The maximum number of data octets that may be received by the
>    sender of this TCP option in TCP segments with no TCP header
>    options transmitted in IP datagrams with no IP header option

Right, it tells the receiver what size of TCP payload the sender can handle. I wonder about "IP datagrams", which reminds me of UDP but has nothing to do with TCP. I think the MSS does nothing for UDP, am I right?

> By "flowtable condition" I'm not sure if you're refering to the "flow
> add" statement through.

right, the "flow add" line with the condition (in my simple example all tcp/udp)

>    chain FORWARD {
>         type filter hook forward priority 0; policy drop;
>
>         tcp flags syn tcp option maxseg size set rt mtu
>         ct state vmap { established : jump FORWARD_established, related : jump FORWARD_established, new : jump FORWARD_new }
>    }
> }

Thanks for the example. I wonder about this:

established : jump FORWARD_established, related : jump FORWARD_established

So established and related are moved to the established chain, so far so good. But you wrote in a previous mail that the forward chain is only processed for the syn packets (the first two: syn and syn+ack), so IMHO there should be no established connections there.

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-06  9:53                                 ` Aw: " Frank Wunderlich
@ 2021-05-06 15:51                                   ` Pablo Neira Ayuso
  2021-05-10  6:50                                     ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-05-06 15:51 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

Hi,

On Thu, May 06, 2021 at 11:53:21AM +0200, Frank Wunderlich wrote:
> Hi
> > Gesendet: Donnerstag, 06. Mai 2021 um 00:55 Uhr
> > Von: "Pablo Neira Ayuso" <pablo@netfilter.org>
> 
> > rfc6691 says that TCP MSS is:
> >
> >    The maximum number of data octets that may be received by the
> >    sender of this TCP option in TCP segments with no TCP header
> >    options transmitted in IP datagrams with no IP header option
> 
> right, tell receiver which size of tcp-payload sender can handle,
> wonder about "IP datagrams" which remembers to udp but have nothing
> to do with tcp. i think mss does nothing for udp, am i right?

UDP relies on IP fragmentation.

> > By "flowtable condition" I'm not sure if you're refering to the "flow
> > add" statement through.
> 
> right, the "flow add" line with the condition (in my simple example all tcp/udp)
> 
> >    chain FORWARD {
> >         type filter hook forward priority 0; policy drop;
> >
> >         tcp flags syn tcp option maxseg size set rt mtu
> >         ct state vmap { established : jump FORWARD_established, related : jump FORWARD_established, new : jump FORWARD_new }
> >    }
> > }
> 
> Thanks for the example, i wonder about this:
> 
> established : jump FORWARD_established, related : jump FORWARD_established
> 
> so established and related are moved to the established-chain, so
> far so good, but you wrote in previous mail, that forward-chain is
> only processed for syn-packets only (first 2: syn and syn-ack), so
> imho there should be no established connections there.

In conntrack, "established" state means: packets in both directions
have been seen, therefore, TCP established != conntrack established.
The first syn-ack reply packet is matching "ct state established"
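
If you want to see this, a small counter chain (sketch) makes it visible that already the syn+ack, i.e. the second packet of the connection, hits the "established" counter:

    # extra base chain just for observation; it does not filter anything
    chain FORWARD_debug {
        type filter hook forward priority -10; policy accept;
        ct state new counter
        ct state established counter
        ct state related counter
    }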

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Re: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-06 15:51                                   ` Pablo Neira Ayuso
@ 2021-05-10  6:50                                     ` Frank Wunderlich
  2021-05-10  8:24                                       ` Pablo Neira Ayuso
  0 siblings, 1 reply; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-10  6:50 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter

Hi

> Sent: Thursday, 6 May 2021 at 17:51
> From: "Pablo Neira Ayuso" <pablo@netfilter.org>

> > >    chain FORWARD {
> > >         type filter hook forward priority 0; policy drop;
> > >
> > >         tcp flags syn tcp option maxseg size set rt mtu
> > >         ct state vmap { established : jump FORWARD_established, related : jump FORWARD_established, new : jump FORWARD_new }

I tried it this way and it seems to work so far. I only have a problem when removing my ruleset with iptables (I do this to reset my complete firewall, not only nft).

iptables -X
iptables v1.8.2 (nf_tables):  CHAIN_USER_DEL failed (Device or resource busy): chain FORWARD_known

I guess iptables cannot delete the chain because it is referenced by the ct state vmap. Any idea?

Is the order in which chains are defined important? Maybe I can move the two new forward chains below the old one with the "ct state vmap".

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Re: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-10  6:50                                     ` Aw: " Frank Wunderlich
@ 2021-05-10  8:24                                       ` Pablo Neira Ayuso
  2021-05-10  9:00                                         ` Aw: " Frank Wunderlich
  0 siblings, 1 reply; 22+ messages in thread
From: Pablo Neira Ayuso @ 2021-05-10  8:24 UTC (permalink / raw)
  To: Frank Wunderlich; +Cc: netfilter

On Mon, May 10, 2021 at 08:50:56AM +0200, Frank Wunderlich wrote:
> Hi
> 
> > Gesendet: Donnerstag, 06. Mai 2021 um 17:51 Uhr
> > Von: "Pablo Neira Ayuso" <pablo@netfilter.org>
> 
> > > >    chain FORWARD {
> > > >         type filter hook forward priority 0; policy drop;
> > > >
> > > >         tcp flags syn tcp option maxseg size set rt mtu
> > > >         ct state vmap { established : jump FORWARD_established, related : jump FORWARD_established, new : jump FORWARD_new }
> 
> tried this way, seems to work so far, i have only problem on removing my ruleset with iptables (have this to reset my complete firewall, not only nft).
> 
> iptables -X
> iptables v1.8.2 (nf_tables):  CHAIN_USER_DEL failed (Device or resource busy): chain FORWARD_known
> 
> i guess iptables cannot delete chain cause it is linked by ctstate vmap any idea?

In iptables, you have to flush a chain (-F) before you can delete it.

Anyway, once you move to nftables, it is better to use native
nftables commands, such as:

 nft flush ruleset
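
If you want to keep deleting individual chains, the reference from the ct state vmap has to be removed first, for example (sketch; table and chain names assumed from your error message and the earlier example):

    nft flush chain ip filter FORWARD      # drops the vmap rule that references the chains
    nft delete chain ip filter FORWARD_known
    nft delete chain ip filter FORWARD_new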

> is order important of defined chains? maybe i can move the 2 new
> forward-chains below old with "ct state vmap"

Not sure what you mean, could you provide an example?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Aw: Re: Re: Re: Re: Re: Re: Re: Flowtable with ppp/bridge
  2021-05-10  8:24                                       ` Pablo Neira Ayuso
@ 2021-05-10  9:00                                         ` Frank Wunderlich
  0 siblings, 0 replies; 22+ messages in thread
From: Frank Wunderlich @ 2021-05-10  9:00 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter



> Sent: Monday, 10 May 2021 at 10:24
> From: "Pablo Neira Ayuso" <pablo@netfilter.org>
> To: "Frank Wunderlich" <frank-w@public-files.de>
> Cc: netfilter@vger.kernel.org
> Subject: Re: Re: Re: Re: Re: Re: Re: Flowtable with ppp/bridge
>
> On Mon, May 10, 2021 at 08:50:56AM +0200, Frank Wunderlich wrote:

> > > Gesendet: Donnerstag, 06. Mai 2021 um 17:51 Uhr
> > > Von: "Pablo Neira Ayuso" <pablo@netfilter.org>
> >
> > > > >    chain FORWARD {
> > > > >         type filter hook forward priority 0; policy drop;
> > > > >
> > > > >         tcp flags syn tcp option maxseg size set rt mtu
> > > > >         ct state vmap { established : jump FORWARD_established, related : jump FORWARD_established, new : jump FORWARD_new }
> >
> > tried this way, seems to work so far, i have only problem on removing my ruleset with iptables (have this to reset my complete firewall, not only nft).
> >
> > iptables -X
> > iptables v1.8.2 (nf_tables):  CHAIN_USER_DEL failed (Device or resource busy): chain FORWARD_known
> >
> > i guess iptables cannot delete chain cause it is linked by ctstate vmap any idea?
>
> In iptables, you have to flush a chain (-F) before you can delete it.

I do this:

+ iptables -F
+ iptables -X
iptables v1.8.2 (nf_tables):  CHAIN_USER_DEL failed (Device or resource busy): chain FORWARD_known
+ iptables -t nat -F
+ iptables -t nat -X
+ iptables -t mangle -F
+ iptables -t mangle -X
+ iptables -P INPUT ACCEPT
+ iptables -P OUTPUT ACCEPT
+ iptables -P FORWARD ACCEPT

and

+ iptables-legacy -F
+ iptables-legacy -X
+ iptables-legacy -t nat -F
+ iptables-legacy -t nat -X
+ iptables-legacy -t mangle -F
+ iptables-legacy -t mangle -X
+ iptables-legacy -P INPUT ACCEPT
+ iptables-legacy -P OUTPUT ACCEPT
+ iptables-legacy -P FORWARD ACCEPT

The same for IPv6, and then I import my ruleset, which contains "flush ruleset" at the top (after the defines):

nft -f /usr/local/sbin/firewall/ruleset_new_v4+v6.nft

> Anyway, once you step in to use nftables, it is better if you use
> native nftables commands to operate, such as:

>  nft flush ruleset

I do this too, but I run iptables first to make sure no legacy firewall is defined that might block traffic (so that I can switch to my old firewall script for testing and back).

> > is order important of defined chains? maybe i can move the 2 new
> > forward-chains below old with "ct state vmap"
>
> Not sure what you mean, could you provide an example?

Currently I define the new forwarding chains first and then use them:

table ip filter {
    flowtable f {
        hook ingress priority 10
        devices = { wan, lan0, lan1, lan2, lan3 }
        flags offload
    }

    chain FORWARD_known {
        ip protocol { tcp, udp } flow add @f
        accept
    }

    chain FORWARD_new {
        #rules for new forward connections
    }

    chain FORWARD {
        type filter hook forward priority 0; policy drop;
        oifname $ifexternal tcp flags syn tcp option maxseg size set 1452
        ct state vmap { established : jump FORWARD_known, related : jump FORWARD_known, new : jump FORWARD_new }
        counter jump REJECTED
    }

    chain REJECTED {
        #limit rate 10/minute counter packets 0 bytes 0 log prefix "NETFILTER4-REJECTED: " level debug
        ip protocol tcp reject with tcp reset comment "Drop with tcp-reset"
        ip protocol udp reject comment "drop and count packets"
        counter drop
    }
}

I see that I also use REJECTED before defining it... so maybe it works with the forwarding chains too, basically putting FORWARD_known and FORWARD_new below FORWARD.

regards Frank

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread

Thread overview: 22+ messages
2021-04-26 15:30 Flowtable with ppp/bridge Frank Wunderlich
2021-04-26 17:29 ` Pablo Neira Ayuso
2021-04-26 17:51   ` Frank Wunderlich
2021-04-26 17:57     ` Pablo Neira Ayuso
2021-04-26 18:08       ` Frank Wunderlich
2021-04-27 23:49         ` Pablo Neira Ayuso
2021-04-28  8:07           ` Frank Wunderlich
2021-04-28 17:26             ` Frank Wunderlich
2021-04-29 13:59               ` Aw: " Frank Wunderlich
2021-05-02 13:51                 ` Frank Wunderlich
2021-05-02 22:11                   ` Pablo Neira Ayuso
2021-05-03 18:56                     ` Aw: " Frank Wunderlich
2021-05-03 21:32                       ` Pablo Neira Ayuso
2021-05-04 10:54                         ` Aw: " Frank Wunderlich
2021-05-04 11:42                           ` Pablo Neira Ayuso
2021-05-05  8:55                             ` Aw: " Frank Wunderlich
2021-05-05 22:55                               ` Pablo Neira Ayuso
2021-05-06  9:53                                 ` Aw: " Frank Wunderlich
2021-05-06 15:51                                   ` Pablo Neira Ayuso
2021-05-10  6:50                                     ` Aw: " Frank Wunderlich
2021-05-10  8:24                                       ` Pablo Neira Ayuso
2021-05-10  9:00                                         ` Aw: " Frank Wunderlich
