* iptables user space performance benchmarks published
@ 2020-06-19 14:11 Phil Sutter
  2020-06-22 12:42 ` Pablo Neira Ayuso
  2020-06-22 14:04 ` Jan Engelhardt
  0 siblings, 2 replies; 16+ messages in thread
From: Phil Sutter @ 2020-06-19 14:11 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter-devel

Hi Pablo,

I remember you once asked for the benchmark scripts I used to compare
performance of iptables-nft with -legacy in terms of command overhead
and caching, as detailed in a blog[1] I wrote about it. I meanwhile
managed to polish the scripts a bit and push them into a public repo,
accessible here[2]. I'm not sure whether they are useful for regular
runs (or even CI) as a single run takes a few hours and parallel use
likely kills result precision.
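
For a rough idea, each data point boils down to timing a batch of iptables
calls, something like the following (a simplified sketch, not the actual
benchmark code; rule count and addresses are made up):

  n=1000
  time for i in $(seq 1 $n); do
    iptables-nft -A FORWARD -s 10.0.$((i / 256)).$((i % 256)) -j ACCEPT
  done
  # ... then the same with iptables-legacy, and compare

The scripts then sweep rule counts and operations and repeat each
measurement a number of times, which is where the hours per run go.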

Cheers, Phil

[1] https://developers.redhat.com/blog/2020/04/27/optimizing-iptables-nft-large-ruleset-performance-in-user-space/
[2] http://nwl.cc/cgi-bin/git/gitweb.cgi?p=ipt-sbs-bench.git;a=summary


* Re: iptables user space performance benchmarks published
  2020-06-19 14:11 iptables user space performance benchmarks published Phil Sutter
@ 2020-06-22 12:42 ` Pablo Neira Ayuso
  2020-06-22 13:34   ` Reindl Harald
  2020-06-22 13:40   ` Phil Sutter
  2020-06-22 14:04 ` Jan Engelhardt
  1 sibling, 2 replies; 16+ messages in thread
From: Pablo Neira Ayuso @ 2020-06-22 12:42 UTC (permalink / raw)
  To: Phil Sutter, netfilter-devel

Hi Phil,

On Fri, Jun 19, 2020 at 04:11:57PM +0200, Phil Sutter wrote:
> Hi Pablo,
> 
> I remember you once asked for the benchmark scripts I used to compare
> performance of iptables-nft with -legacy in terms of command overhead
> and caching, as detailed in a blog[1] I wrote about it. I meanwhile
> managed to polish the scripts a bit and push them into a public repo,
> accessible here[2]. I'm not sure whether they are useful for regular
> runs (or even CI) as a single run takes a few hours and parallel use
> likely kills result precision.

So what is the _technical_ incentive for using the iptables blob
interface (a.k.a. legacy) these days then?

The iptables-nft frontend is transparent and it outperforms the legacy
code for dynamic rulesets.

Thanks.

> [1] https://developers.redhat.com/blog/2020/04/27/optimizing-iptables-nft-large-ruleset-performance-in-user-space/
> [2] http://nwl.cc/cgi-bin/git/gitweb.cgi?p=ipt-sbs-bench.git;a=summary


* Re: iptables user space performance benchmarks published
  2020-06-22 12:42 ` Pablo Neira Ayuso
@ 2020-06-22 13:34   ` Reindl Harald
  2020-06-22 14:04     ` Phil Sutter
  2020-06-22 16:23     ` Stefano Brivio
  2020-06-22 13:40   ` Phil Sutter
  1 sibling, 2 replies; 16+ messages in thread
From: Reindl Harald @ 2020-06-22 13:34 UTC (permalink / raw)
  To: Pablo Neira Ayuso, Phil Sutter, netfilter-devel



On 22.06.20 at 14:42, Pablo Neira Ayuso wrote:
> Hi Phil,
> 
> On Fri, Jun 19, 2020 at 04:11:57PM +0200, Phil Sutter wrote:
>> Hi Pablo,
>>
>> I remember you once asked for the benchmark scripts I used to compare
>> performance of iptables-nft with -legacy in terms of command overhead
>> and caching, as detailed in a blog[1] I wrote about it. I meanwhile
>> managed to polish the scripts a bit and push them into a public repo,
>> accessible here[2]. I'm not sure whether they are useful for regular
>> runs (or even CI) as a single run takes a few hours and parallel use
>> likely kills result precision.
> 
> So what is the _technical_ incentive for using the iptables blob
> interface (a.k.a. legacy) these days then?
> 
> The iptables-nft frontend is transparent and it outperforms the legacy
> code for dynamic rulesets.

it is not transparent enough because it doesn't understand classical ipset

my shell scripts creating the ruleset, chains and ipsets can be switched
from iptables-legacy to iptables-nft, and before the reboot (despite the
warning that both backends are loaded) the ruleset *looked* more or less
fine when comparing the output of both backends

I gave it one try and used "iptables-nft-restore" and "ip6tables-nft";
after the reboot nothing worked at all

via the console I called "firewall.sh" again, which deletes all rules and
chains and then re-creates them: no success, just errors that things
already exist

please don't consider dropping iptables-legacy; it just works, and I am
missing a compelling argument to rework thousands of hours of work


* Re: iptables user space performance benchmarks published
  2020-06-22 12:42 ` Pablo Neira Ayuso
  2020-06-22 13:34   ` Reindl Harald
@ 2020-06-22 13:40   ` Phil Sutter
  1 sibling, 0 replies; 16+ messages in thread
From: Phil Sutter @ 2020-06-22 13:40 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter-devel

Hi Pablo,

On Mon, Jun 22, 2020 at 02:42:07PM +0200, Pablo Neira Ayuso wrote:
> On Fri, Jun 19, 2020 at 04:11:57PM +0200, Phil Sutter wrote:
> > Hi Pablo,
> > 
> > I remember you once asked for the benchmark scripts I used to compare
> > performance of iptables-nft with -legacy in terms of command overhead
> > and caching, as detailed in a blog[1] I wrote about it. I meanwhile
> > managed to polish the scripts a bit and push them into a public repo,
> > accessible here[2]. I'm not sure whether they are useful for regular
> > runs (or even CI) as a single run takes a few hours and parallel use
> > likely kills result precision.
> 
> So what is the _technical_ incentive for using the iptables blob
> interface (a.k.a. legacy) these days then?

Mostly interoperability, I guess. A recent real-world scenario is host
firewall management from inside a container (please don't ask me why):
if the host uses legacy iptables (for legacy reasons ;) the top-notch,
state-of-the-art container has to do so as well, or hell freezes over.

Apart from that, I can imagine there are users depending on one of the
few missing features, e.g. the broute table in ebtables.

> The iptables-nft frontend is transparent and it outperforms the legacy
> code for dynamic rulesets.

Sadly, we can't claim the same for nft - its caching strategy is dumb
compared to what iptables-nft does nowadays. I guess that should be my
follow-up task. :)

Cheers, Phil


* Re: iptables user space performance benchmarks published
  2020-06-19 14:11 iptables user space performance benchmarks published Phil Sutter
  2020-06-22 12:42 ` Pablo Neira Ayuso
@ 2020-06-22 14:04 ` Jan Engelhardt
  2020-06-22 14:35   ` Phil Sutter
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Engelhardt @ 2020-06-22 14:04 UTC (permalink / raw)
  To: Phil Sutter; +Cc: Pablo Neira Ayuso, netfilter-devel


On Friday 2020-06-19 16:11, Phil Sutter wrote:
>
>I remember you once asked for the benchmark scripts I used to compare
>performance of iptables-nft with -legacy in terms of command overhead
>and caching, as detailed in a blog[1] I wrote about it. I meanwhile
>managed to polish the scripts a bit and push them into a public repo,
>accessible here[2]. I'm not sure whether they are useful for regular
>runs (or even CI) as a single run takes a few hours and parallel use
>likely kills result precision.
>
>[1] https://developers.redhat.com/blog/2020/04/27/optimizing-iptables-nft-large-ruleset-performance-in-user-space/
>
>"""My main suspects for why iptables-nft performed so poorly were kernel ruleset
>caching and the internal conversion from nftables rules in libnftnl data
>structures to iptables rules in libxtables data structures."""

Did you record any syscall-induced latency? The classic ABI used a
one-syscall approach, passing the entire buffer at once. With
netlink, it's a bit of a ping-pong between user and kernel unless one
uses mmap like on AF_PACKET — and I don't see any mmap in libmnl or
libnftnl.
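
(A quick way to compare would be counting the netlink traffic, e.g.

  strace -c -e trace=%network iptables-nft-restore <some-ruleset.txt

versus the same invocation with iptables-legacy-restore.)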

Furthermore, loading the ruleset is just one aspect. Evaluating it
for every packet is what should weigh in a lot more. Did you by
chance collect any numbers in that regard?


* Re: iptables user space performance benchmarks published
  2020-06-22 13:34   ` Reindl Harald
@ 2020-06-22 14:04     ` Phil Sutter
  2020-06-22 14:11       ` Reindl Harald
  2020-06-22 16:23     ` Stefano Brivio
  1 sibling, 1 reply; 16+ messages in thread
From: Phil Sutter @ 2020-06-22 14:04 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Pablo Neira Ayuso, netfilter-devel

Hi Harald,

On Mon, Jun 22, 2020 at 03:34:24PM +0200, Reindl Harald wrote:
> On 22.06.20 at 14:42, Pablo Neira Ayuso wrote:
> > Hi Phil,
> > 
> > On Fri, Jun 19, 2020 at 04:11:57PM +0200, Phil Sutter wrote:
> >> Hi Pablo,
> >>
> >> I remember you once asked for the benchmark scripts I used to compare
> >> performance of iptables-nft with -legacy in terms of command overhead
> >> and caching, as detailed in a blog[1] I wrote about it. I meanwhile
> >> managed to polish the scripts a bit and push them into a public repo,
> >> accessible here[2]. I'm not sure whether they are useful for regular
> >> runs (or even CI) as a single run takes a few hours and parallel use
> >> likely kills result precision.
> > 
> > So what is the _technical_ incentive for using the iptables blob
> > interface (a.k.a. legacy) these days then?
> > 
> > The iptables-nft frontend is transparent and it outperforms the legacy
> > code for dynamic rulesets.
> 
> it is not transparent enough because it doesn't understand classical ipset

It does! You can use ipsets with iptables-nft just as before. If your
experience differs, that's a bug we should fix.
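
For instance, something along these lines (untested, set name made up)
should behave the same with either backend:

  ipset create BLOCKED hash:ip
  ipset add BLOCKED 192.0.2.1
  iptables-nft -A INPUT -m set --match-set BLOCKED src -j DROP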

> my shell scripts creating the ruleset, chains and ipsets can be switched
> from iptables-legacy to iptables-nft, and before the reboot (despite the
> warning that both backends are loaded) the ruleset *looked* more or less
> fine when comparing the output of both backends
> 
> I gave it one try and used "iptables-nft-restore" and "ip6tables-nft";
> after the reboot nothing worked at all

Not good. Did you find out *why* nothing worked anymore? Would you maybe
care to share your script and ruleset with us?

> via the console I called "firewall.sh" again, which deletes all rules and
> chains and then re-creates them: no success, just errors that things
> already exist

That sounds weird, if it reliably drops everything why does it complain
with EEXIST?

> please don't consider dropping iptables-legacy; it just works, and I am
> missing a compelling argument to rework thousands of hours of work

I'm not the one to make that call, but IMHO the plan is for
iptables-legacy to become irrelevant *before* it is dropped from
upstream repositories. So as long as you are still using it (and you're
not an irrelevant minority ;) no harm done.

Cheers, Phil


* Re: iptables user space performance benchmarks published
  2020-06-22 14:04     ` Phil Sutter
@ 2020-06-22 14:11       ` Reindl Harald
  2020-06-22 14:54         ` Phil Sutter
  0 siblings, 1 reply; 16+ messages in thread
From: Reindl Harald @ 2020-06-22 14:11 UTC (permalink / raw)
  To: Phil Sutter, Pablo Neira Ayuso, netfilter-devel



On 22.06.20 at 16:04, Phil Sutter wrote:
>> I gave it one try and used "iptables-nft-restore" and "ip6tables-nft";
>> after the reboot nothing worked at all
> 
> Not good. Did you find out *why* nothing worked anymore? Would you maybe
> care to share your script and ruleset with us?

I could share it off-list; it's a bunch of stuff including a management
interface written in bash, and it is designed for a /24 1:1 NETMAP

basically it already has a config switch to enforce iptables-nft

FILE                    TOTAL  STRIPPED  SIZE
tui.sh                  1653   1413      80K
firewall.sh             984    738       57K
shared.inc.sh           578    407       28K
custom.inc.sh           355    112       13K
config.inc.sh           193    113       6.2K
update-blocked-feed.sh  68     32        4.1K

[harry@srv-rhsoft:/data/lounge-daten/firewall/snapshots/2020-06-21]$
/bin/ls -1 ipset_*
ipset_ADMIN_CLIENTS.txt
ipset_BAYES_SYNC.txt
ipset_BLOCKED.txt
ipset_EXCLUDES.txt
ipset_HONEYPOT_IPS.txt
ipset_HONEYPOT_PORTS.txt
ipset_IANA_RESERVED.txt
ipset_INFRASTRUCTURE.txt
ipset_IPERF.txt
ipset_JABBER.txt
ipset_LAN_VPN_FORWARDING.txt
ipset_OUTBOUND_BLOCKED_PORTS.txt
ipset_OUTBOUND_BLOCKED_SRC.txt
ipset_PORTSCAN_PORTS.txt
ipset_PORTS_MAIL.txt
ipset_PORTS_RESTRICTED.txt
ipset_RBL_SYNC.txt
ipset_RESTRICTED.txt
ipset_SFTP_22.txt

>> via the console I called "firewall.sh" again, which deletes all rules and
>> chains and then re-creates them: no success, just errors that things
>> already exist
> 
> That sounds weird, if it reliably drops everything why does it complain
> with EEXIST?

that was the reason why I finally gave up

>> please don't consider dropping iptables-legacy; it just works, and I am
>> missing a compelling argument to rework thousands of hours of work
> 
> I'm not the one to make that call, but IMHO the plan is for
> iptables-legacy to become irrelevant *before* it is dropped from
> upstream repositories. So as long as you are still using it (and you're
> not an irrelevant minority ;) no harm done.

well, my machines date back to 2008 and I don't plan to re-install them,
given that I am just 42 years old now :-)


* Re: iptables user space performance benchmarks published
  2020-06-22 14:04 ` Jan Engelhardt
@ 2020-06-22 14:35   ` Phil Sutter
  0 siblings, 0 replies; 16+ messages in thread
From: Phil Sutter @ 2020-06-22 14:35 UTC (permalink / raw)
  To: Jan Engelhardt; +Cc: Pablo Neira Ayuso, netfilter-devel

Hi Jan,

On Mon, Jun 22, 2020 at 04:04:43PM +0200, Jan Engelhardt wrote:
> On Friday 2020-06-19 16:11, Phil Sutter wrote:
> >I remember you once asked for the benchmark scripts I used to compare
> >performance of iptables-nft with -legacy in terms of command overhead
> >and caching, as detailed in a blog[1] I wrote about it. I meanwhile
> >managed to polish the scripts a bit and push them into a public repo,
> >accessible here[2]. I'm not sure whether they are useful for regular
> >runs (or even CI) as a single run takes a few hours and parallel use
> >likely kills result precision.
> >
> >[1] https://developers.redhat.com/blog/2020/04/27/optimizing-iptables-nft-large-ruleset-performance-in-user-space/
> >
> >"""My main suspects for why iptables-nft performed so poorly were kernel ruleset
> >caching and the internal conversion from nftables rules in libnftnl data
> >structures to iptables rules in libxtables data structures."""
> 
> Did you record any syscall-induced latency? The classic ABI used a
> one-syscall approach, passing the entire buffer at once. With
> netlink, it's a bit of a ping-pong between user and kernel unless one
> uses mmap like on AF_PACKET — and I don't see any mmap in libmnl or
> libnftnl.

While it is true that no zero-copy mechanisms are used by
libmnl/libnftnl, an early improvement I did was to max out receive
buffer size (see commit 5a0294901db1d which also has some figures).
After all though, I would consider this to be mostly relevant when
loading a large ruleset and that is rather a one-time action, for
instance during system boot-up.

Some "quick changes" like, e.g. adding an IP to a blacklist, usually
don't need to push much data to the kernel for zero-copy to become
relevant. (Of course they may still benefit if setup overhead can be
kept low).

> Furthermore, loading the ruleset is just one aspect. Evaluating it
> for every packet is what should weigh in a lot more. Did you by
> chance collect any numbers in that regard?

Not really. I did some runtime measurements once but unless there's an
undiscovered performance loop I wouldn't expect much to improve there.

Obviously, a much larger factor is ruleset design. I guess most
existing, legacy rulesets out there would largely benefit from
introducing ipset. Duplicating the same crappy ruleset in nftables is
pointless. Making it use nftables' features after the conversion is not
trivial, but the results aren't even comparable afterwards. At least that's
my takeaway from trying; see the related blog[1] for details.
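
(To illustrate with a made-up example: a long run of per-address rules
collapses into a single set lookup, roughly

  nft add table inet filter
  nft add chain inet filter input '{ type filter hook input priority 0; }'
  nft add set inet filter blocked '{ type ipv4_addr; flags interval; }'
  nft add element inet filter blocked '{ 192.0.2.0/24, 198.51.100.7 }'
  nft add rule inet filter input ip saddr @blocked drop

i.e. one rule no matter how many entries the set holds.)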

Cheers, Phil

[1] https://developers.redhat.com/blog/2017/04/11/benchmarking-nftables/



* Re: iptables user space performance benchmarks published
  2020-06-22 14:11       ` Reindl Harald
@ 2020-06-22 14:54         ` Phil Sutter
  2020-06-22 15:19           ` Reindl Harald
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Sutter @ 2020-06-22 14:54 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Pablo Neira Ayuso, netfilter-devel

Hi,

On Mon, Jun 22, 2020 at 04:11:06PM +0200, Reindl Harald wrote:
> On 22.06.20 at 16:04, Phil Sutter wrote:
> >> I gave it one try and used "iptables-nft-restore" and "ip6tables-nft";
> >> after the reboot nothing worked at all
> > 
> > Not good. Did you find out *why* nothing worked anymore? Would you maybe
> > care to share your script and ruleset with us?
> 
> I could share it off-list; it's a bunch of stuff including a management
> interface written in bash, and it is designed for a /24 1:1 NETMAP

Yes, please share off-list. I'll see if I can reproduce the problem.

> basically it already has a config switch to enforce iptables-nft
> 
> FILE                    TOTAL  STRIPPED  SIZE
> tui.sh                  1653   1413      80K
> firewall.sh             984    738       57K
> shared.inc.sh           578    407       28K
> custom.inc.sh           355    112       13K
> config.inc.sh           193    113       6.2K
> update-blocked-feed.sh  68     32        4.1K

Let's hope I don't have to read all of that. /o\

[...]
> >> please don't consider dropping iptables-legacy; it just works, and I am
> >> missing a compelling argument to rework thousands of hours of work
> > 
> > I'm not the one to make that call, but IMHO the plan is for
> > iptables-legacy to become irrelevant *before* it is dropped from
> > upstream repositories. So as long as you are still using it (and you're
> > not an irrelevant minority ;) no harm done.
> 
> well, my machines date back to 2008 and I don't plan to re-install them,
> given that I am just 42 years old now :-)

You're sending emails, so you're alive and kicking! There's absolutely
no reason your systems shouldn't be, either. After all, where's the fun
in keeping a box up to date if not for the occasional technology
migration (and the sleepless nights spent fixing the bugs)? :)

Cheers, Phil


* Re: iptables user space performance benchmarks published
  2020-06-22 14:54         ` Phil Sutter
@ 2020-06-22 15:19           ` Reindl Harald
  2020-06-22 15:44             ` Phil Sutter
  0 siblings, 1 reply; 16+ messages in thread
From: Reindl Harald @ 2020-06-22 15:19 UTC (permalink / raw)
  To: Phil Sutter, Pablo Neira Ayuso, netfilter-devel



On 22.06.20 at 16:54, Phil Sutter wrote:
> On Mon, Jun 22, 2020 at 04:11:06PM +0200, Reindl Harald wrote:
>> On 22.06.20 at 16:04, Phil Sutter wrote:
>>>> I gave it one try and used "iptables-nft-restore" and "ip6tables-nft";
>>>> after the reboot nothing worked at all
>>>
>>> Not good. Did you find out *why* nothing worked anymore? Would you maybe
>>> care to share your script and ruleset with us?
>>
>> I could share it off-list; it's a bunch of stuff including a management
>> interface written in bash, and it is designed for a /24 1:1 NETMAP
> 
> Yes, please share off-list. I'll see if I can reproduce the problem.
> 
>> basically it already has a config switch to enforce iptables-nft
>>
>> FILE                    TOTAL  STRIPPED  SIZE
>> tui.sh                  1653   1413      80K
>> firewall.sh             984    738       57K
>> shared.inc.sh           578    407       28K
>> custom.inc.sh           355    112       13K
>> config.inc.sh           193    113       6.2K
>> update-blocked-feed.sh  68     32        4.1K
> 
> Let's hope I don't have to read all of that. /o\

to see the testing that is implemented, please scroll to the bottom :-)

all that stuff lives in a demo setup at home which reacts slightly
differently when $HOSTNAME is "firewall.vmware.local"

sure, you can have the scripts alone, but it's likely easier to get the
ESXi started somehow and have a fully working network reflecting
production, just with different LAN/WAN ranges

-------------------------------

in config.inc.sh the line "ENABLE_IPTABLES_NFT=1" switches the backend,
and before the reboot /etc/systemd/system/network-up.service needs to be
adjusted to also use iptables-nft for the restore

-------------------------------

the ESXi nested in VMware Workstation hosts 4 VMs

* firewall
* test (two IPs on the WAN interface)
* client (a full /24, listening on TCP 1-65535)
* buildserver to compile packages

on the VM "test" there is a script checking whether the ruleset works as
expected: NAT, WireGuard and so on

see the output of the "make me happy testing" interacting between
firewall, test and client

-------------------------------

what I can offer you is the whole folder, after setting "letmein" as the
root password everywhere, over some private channel, with me replacing all
my ssh/WireGuard keys *after* that

so you can have a fully working setup, and in case you hand over an ssh
public key I'll add it to allowed-keys before dumping the whole beast

-------------------------------

[root@srv-rhsoft:~]$ ls /vmware/esx/
total 4,9G
-rw------- 1 vmware vmware 265K 2020-06-22 10:27 esx1.nvram
-rw------- 1 vmware vmware 4,9G 2020-06-22 17:07 esx1-1.vmdk
-rw------- 1 vmware vmware   67 2020-06-20 19:32 esx1.vmsd
-rwx------ 1 vmware vmware 4,5K 2020-06-21 17:07 esx1.vmx
-rw------- 1 vmware vmware  259 2020-06-20 19:32 esx1.vmxf

-------------------------------

not sure how well that goes with ESXi 7 and how to convert the VMware
vmdk properly; that's your part :-)

https://fabianlee.org/2018/09/19/kvm-deploying-a-nested-version-of-vmware-esxi-6-7-inside-kvm/

-------------------------------


[root@firewall:~]$ stresstest.sh
Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family
AF_UNSPEC
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost ()
port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    10.00    8668.17
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost ()
port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    10.00    8795.55
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost ()
port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    10.00    8873.98
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost ()
port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    10.00    8414.01
hping in flood mode, no replies will be shown
hping in flood mode, no replies will be shown
hping in flood mode, no replies will be shown
hping in flood mode, no replies will be shown

--- 172.17.0.x hping statistic ---
136100 packets transmitted, 0 packets received, 100% packet loss

--- 172.17.0.x hping statistic ---
round-trip min/avg/max = 0.0/0.0/0.0 ms
HPING 172.17.0.x (eth0 172.17.0.x): S set, 40 headers + 0 data bytes
HPING 172.17.0.x (eth0 172.17.0.x): SPUXY set, 40 headers + 0 data bytes
HPING 172.17.0.5 (eth0 172.17.0.5): S set, 40 headers + 0 data bytes
135812 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms

--- 172.17.0.5 hping statistic ---
136939 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
HPING 172.17.0.6 (eth0 172.17.0.6): S set, 40 headers + 0 data bytes

--- 172.17.0.6 hping statistic ---
137024 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms
---------------------------------------------------------
172.17.0.1:

TCP 8000: 0 OK
---------------------------------------------------------
172.17.0.16:

UDP 53: 1 OK
UDP 853: 1 OK
TCP 53: 1 OK
TCP 10022: 1 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 3389: 0 OK
TCP 5900: 0 OK
---------------------------------------------------------
172.17.0.2:

TCP 445: 1 OK
TCP 3306: 1 OK
TCP 3389: 1 OK
TCP 5900: 1 OK
---------------------------------------------------------
172.17.0.3:

TCP 10022: 1 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
---------------------------------------------------------
172.17.0.4:

UDP 53: 0 OK
TCP 80: 1 OK
TCP 443: 1 OK
TCP 10022: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
---------------------------------------------------------
172.17.0.5:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.6:

UDP 53: 0 OK
UDP 67: 0 OK
UDP 68: 0 OK
TCP 21: 1 OK
TCP 80: 1 OK
TCP 443: 1 OK
TCP 10022: 1 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 143: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 8080: 0 OK
TCP 8443: 0 OK
---------------------------------------------------------
172.17.0.8:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.9:

TCP 80: 1 OK
TCP 443: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.10:

TCP 80: 1 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.11:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.15:

UDP 53: 0 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 24: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 2000: 0 OK
TCP 3306: 0 OK
TCP 465: 1 OK
TCP 587: 1 OK
TCP 588: 1 OK
TCP 110: 1 OK
TCP 143: 1 OK
TCP 993: 1 OK
TCP 995: 1 OK
TCP 10022: 1 OK
---------------------------------------------------------
172.17.0.17:

TCP 21: 1 OK
TCP 80: 1 OK
TCP 443: 1 OK
TCP 465: 1 OK
TCP 587: 1 OK
TCP 110: 1 OK
TCP 143: 1 OK
TCP 993: 1 OK
TCP 995: 1 OK
TCP 10022: 1 OK
TCP 22: 0 OK
TCP 23: 0 OK
TCP 24: 0 OK
TCP 25: 0 OK
TCP 445: 0 OK
TCP 2000: 0 OK
TCP 3306: 0 OK
TCP 3307: 0 OK
---------------------------------------------------------
172.17.0.19:

UDP 53: 0 OK
UDP 1053: 0 OK
TCP 25: 1 OK
TCP 10022: 1 OK
TCP 22: 0 OK
TCP 53: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10025: 0 OK
---------------------------------------------------------
172.17.0.20:

TCP 25: 1 OK
TCP 10022: 0 OK
TCP 22: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
---------------------------------------------------------
172.17.0.21:

TCP 25: 1 OK
TCP 10022: 0 OK
TCP 22: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
---------------------------------------------------------
172.17.0.30:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 10022: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 873: 0 OK
TCP 5222: 0 OK
TCP 5269: 0 OK
---------------------------------------------------------
172.17.0.32:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 10022: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
---------------------------------------------------------
172.17.0.34:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.35:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.38:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 10022: 1 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 3306: 0 OK
TCP 445: 0 OK
---------------------------------------------------------
172.17.0.99:

TCP 80: 1 OK
TCP 443: 1 OK
TCP 10022: 1 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 23: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 445: 0 OK
TCP 465: 0 OK
TCP 587: 0 OK
TCP 110: 0 OK
TCP 143: 0 OK
TCP 993: 0 OK
TCP 995: 0 OK
TCP 3306: 0 OK
TCP 3307: 0 OK
---------------------------------------------------------
172.17.0.115:

UDP 5353: 0 OK
TCP 21: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 548: 0 OK
TCP 3306: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.116:

UDP 111: 0 OK
UDP 139: 0 OK
UDP 2049: 0 OK
TCP 111: 0 OK
TCP 139: 0 OK
TCP 445: 0 OK
TCP 2049: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
172.17.0.132:

UDP 53: 0 OK
UDP 69: 0 OK
UDP 88: 0 OK
UDP 111: 0 OK
UDP 135: 0 OK
UDP 514: 0 OK
UDP 902: 0 OK
UDP 930: 0 OK
UDP 5355: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 80: 0 OK
TCP 88: 0 OK
TCP 111: 0 OK
TCP 135: 0 OK
TCP 389: 0 OK
TCP 443: 0 OK
TCP 514: 0 OK
TCP 587: 0 OK
TCP 636: 0 OK
TCP 1514: 0 OK
TCP 2012: 0 OK
TCP 2014: 0 OK
TCP 2015: 0 OK
TCP 2020: 0 OK
TCP 5090: 0 OK
TCP 5432: 0 OK
TCP 5443: 0 OK
TCP 5480: 0 OK
TCP 7080: 0 OK
TCP 7081: 0 OK
TCP 7444: 0 OK
TCP 8006: 0 OK
TCP 8085: 0 OK
TCP 8089: 0 OK
TCP 8190: 0 OK
TCP 8191: 0 OK
TCP 8200: 0 OK
TCP 8201: 0 OK
TCP 8300: 0 OK
TCP 8301: 0 OK
TCP 8900: 0 OK
TCP 9090: 0 OK
TCP 9443: 0 OK
TCP 10080: 0 OK
TCP 11711: 0 OK
TCP 11712: 0 OK
TCP 12080: 0 OK
TCP 12346: 0 OK
TCP 12721: 0 OK
TCP 13080: 0 OK
TCP 15005: 0 OK
TCP 15007: 0 OK
TCP 16666: 0 OK
TCP 18090: 0 OK
TCP 18091: 0 OK
TCP 22000: 0 OK
TCP 22100: 0 OK
TCP 39950: 0 OK
---------------------------------------------------------
172.17.0.100:

UDP 427: 0 OK
UDP 443: 0 OK
UDP 549: 0 OK
UDP 902: 0 OK
TCP 22: 0 OK
TCP 80: 0 OK
TCP 427: 0 OK
TCP 443: 0 OK
TCP 549: 0 OK
TCP 902: 0 OK
---------------------------------------------------------
172.17.0.128:

UDP 427: 0 OK
UDP 443: 0 OK
UDP 549: 0 OK
UDP 902: 0 OK
TCP 22: 0 OK
TCP 80: 0 OK
TCP 427: 0 OK
TCP 443: 0 OK
TCP 549: 0 OK
TCP 902: 0 OK
---------------------------------------------------------
172.17.0.215:

UDP 53: 0 OK
UDP 5353: 0 OK
TCP 22: 0 OK
TCP 25: 0 OK
TCP 53: 0 OK
TCP 80: 0 OK
TCP 443: 0 OK
TCP 445: 0 OK
TCP 3306: 0 OK
TCP 5900: 0 OK
TCP 10022: 0 OK
---------------------------------------------------------
OK: 323, TCP-OPEN: 60, UDP-OPEN: 2, RUNTIME: 42

---------------------------------------------------------
PORTSCAN-TRIGGER:

CALL HONEYPOT: http://172.17.0.98:445: OK: (Status: 1)
CALL ALLOWED: http://172.17.0.6:80: OK: (Status: 1)
CALL TRIGGER: http://172.17.0.6:445: OK: (Status: 0)
CALL CHECK: http://172.17.0.6:80: OK: (Status: 0)
CALL CHECK: http://172.17.0.6:80: OK: (Status: 0)
SLEEP 12 seconds
OK: (http://172.17.0.6:80, Status: 1)
---------------------------------------------------------

---------------------------------------------------------
CONNLIMIT:

CALL ALLOWED: http://172.17.0.6:80: OK: (Status: 1)
CALL 500 TIMES IN BACKGROUND: http://172.17.0.6:80
CALL: http://172.17.0.6:80: OK: (Status: 0)
---------------------------------------------------------

---------------------------------------------------------
VPN:

UDP 172.17.0.6:53: 0 OK
UDP 172.16.0.6:53: 1 OK
TCP 172.17.0.6:445: 0 OK
TCP 172.16.0.6:445: 1 OK

WIREGUARD:
Mon Jun 22 17:13:50 CEST 2020
Mon Jun 22 17:14:06 CEST 2020
 * 200 Mbits/sec
 * 203 Mbits/sec
Mon Jun 22 17:14:22 CEST 2020

NAT:
Mon Jun 22 17:14:22 CEST 2020
Mon Jun 22 17:14:38 CEST 2020
 * 661 Mbits/sec
 * 1.20 Gbits/sec
Mon Jun 22 17:14:54 CEST 2020

LAN:
Mon Jun 22 17:14:54 CEST 2020
Mon Jun 22 17:15:10 CEST 2020
 * 3.50 Gbits/sec
 * 4.46 Gbits/sec
Mon Jun 22 17:15:26 CEST 2020


* Re: iptables user space performance benchmarks published
  2020-06-22 15:19           ` Reindl Harald
@ 2020-06-22 15:44             ` Phil Sutter
  2020-06-22 16:29               ` Reindl Harald
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Sutter @ 2020-06-22 15:44 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Pablo Neira Ayuso, netfilter-devel

Harald,

On Mon, Jun 22, 2020 at 05:19:53PM +0200, Reindl Harald wrote:
> On 22.06.20 at 16:54, Phil Sutter wrote:
> > On Mon, Jun 22, 2020 at 04:11:06PM +0200, Reindl Harald wrote:
> >> On 22.06.20 at 16:04, Phil Sutter wrote:
> >>>> I gave it one try and used "iptables-nft-restore" and "ip6tables-nft";
> >>>> after the reboot nothing worked at all
> >>>
> >>> Not good. Did you find out *why* nothing worked anymore? Would you maybe
> >>> care to share your script and ruleset with us?
> >>
> >> I could share it off-list; it's a bunch of stuff including a management
> >> interface written in bash, and it is designed for a /24 1:1 NETMAP
> > 
> > Yes, please share off-list. I'll see if I can reproduce the problem.
> > 
> >> basically it already has a config switch to enforce iptables-nft
> >>
> >> FILE                    TOTAL  STRIPPED  SIZE
> >> tui.sh                  1653   1413      80K
> >> firewall.sh             984    738       57K
> >> shared.inc.sh           578    407       28K
> >> custom.inc.sh           355    112       13K
> >> config.inc.sh           193    113       6.2K
> >> update-blocked-feed.sh  68     32        4.1K
> > 
> > Let's hope I don't have to read all of that. /o\
> 
> to see the testing that is implemented, please scroll to the bottom :-)
>
> all that stuff lives in a demo setup at home which reacts slightly
> differently when $HOSTNAME is "firewall.vmware.local"
>
> sure, you can have the scripts alone, but it's likely easier to get the
> ESXi started somehow and have a fully working network reflecting
> production, just with different LAN/WAN ranges

Sorry, no thanks. If your setup is so complicated that you'd rather send
me an image of the machine(s?) running it, you're in dire need of
simplifying things before I can help out. Assuming that
'firewall.sh' is also really 57KB in size, I'll probably have a hard
time even making it do what it's supposed to, let alone reproduce the
problem.

Let's go another route: Before and after switching from legacy to nft
backend, please collect the current ruleset by recording the output of:

- iptables-save
- ip6tables-save
- nft list ruleset
- ipset list
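
For example (file names are just placeholders):

  { iptables-save; ip6tables-save; nft list ruleset; ipset list; } >before.txt
  # ... switch the backend, reboot ...
  { iptables-save; ip6tables-save; nft list ruleset; ipset list; } >after.txt
  diff -u before.txt after.txt

so we have something concrete to compare.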

Cheers, Phil


* Re: iptables user space performance benchmarks published
  2020-06-22 13:34   ` Reindl Harald
  2020-06-22 14:04     ` Phil Sutter
@ 2020-06-22 16:23     ` Stefano Brivio
  2020-06-22 16:38       ` Reindl Harald
  1 sibling, 1 reply; 16+ messages in thread
From: Stefano Brivio @ 2020-06-22 16:23 UTC (permalink / raw)
  To: Reindl Harald, Phil Sutter
  Cc: Pablo Neira Ayuso, netfilter-devel, Jozsef Kadlecsik

[Adding József]

On Mon, 22 Jun 2020 15:34:24 +0200
Reindl Harald <h.reindl@thelounge.net> wrote:

> On 22.06.20 at 14:42, Pablo Neira Ayuso wrote:
> > Hi Phil,
> > 
> > On Fri, Jun 19, 2020 at 04:11:57PM +0200, Phil Sutter wrote:  
> >> Hi Pablo,
> >>
> >> I remember you once asked for the benchmark scripts I used to compare
> >> performance of iptables-nft with -legacy in terms of command overhead
> >> and caching, as detailed in a blog[1] I wrote about it. I meanwhile
> >> managed to polish the scripts a bit and push them into a public repo,
> >> accessible here[2]. I'm not sure whether they are useful for regular
> >> runs (or even CI) as a single run takes a few hours and parallel use
> >> likely kills result precision.  
> > 
> > So what is the _technical_ incentive for using the iptables blob
> > interface (a.k.a. legacy) these days then?
> > 
> > The iptables-nft frontend is transparent and it outperforms the legacy
> > code for dynamic rulesets.  
> 
> it is not transparent enough because it doesn't understand classical ipset

By the way, now nftables should natively support all the features from
ipset.

My plan (for which I haven't found the time in months) would be to
write some kind of "reference" wrapper to create nftables sets from
ipset commands, and to render them back as ipset-style output.
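
(Roughly: such a wrapper would map, say,

  ipset add BLOCKED 192.0.2.1

onto something like

  nft add element inet filter BLOCKED '{ 192.0.2.1 }'

with table, family and set type chosen by the wrapper, and it would render
"nft list set ..." back in ipset's list format.)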

I wonder if this should become the job of iptables-nft, eventually.

-- 
Stefano



* Re: iptables user space performance benchmarks published
  2020-06-22 15:44             ` Phil Sutter
@ 2020-06-22 16:29               ` Reindl Harald
  2020-06-22 16:45                 ` Phil Sutter
  0 siblings, 1 reply; 16+ messages in thread
From: Reindl Harald @ 2020-06-22 16:29 UTC (permalink / raw)
  To: Phil Sutter, Pablo Neira Ayuso, netfilter-devel



On 22.06.20 at 17:44, Phil Sutter wrote:
> Sorry, no thanks. If your setup is so complicated that you'd rather send
> me an image of the machine(s?) running it, you're in dire need of
> simplifying things before I can help out. Assuming that
> 'firewall.sh' is also really 57KB in size, I'll probably have a hard
> time even making it do what it's supposed to, let alone reproduce the
> problem.

yeah, it's a corporate firewall with DoS protection, portscan triggers
and a ton of fancy stuff, ending up in 270 rules which are 100% needed
(most are chains that log something with -m limit and then do something
using nflog/ulogd)

> Let's go another route: Before and after switching from legacy to nft
> backend, please collect the current ruleset by recording the output of:
> 
> - iptables-save
> - ip6tables-save
> - nft list ruleset
> - ipset list

*good news* with xtables-save v1.8.3 on Fedora 31

unlike at the last try, after switching to ip(6)tables-nft(-restore) and
rebooting, the network now seems to work properly

not only does ssh behind an ipset rule now work, my "test.php" also
confirms that the rate limits, the portscan trigger and the NAT are working

the iptables-legacy layer is definitely empty after the reboot

-------------------------------

but what is the replacement for iterating over "/proc/net/ip_tables_names"
and "/proc/net/ip6_tables_names" in case "iptables-nft" is in use?

that is not only used for the reset but also in several places for status
counters and for displaying rulesets in "-t filter", "-t mangle" and "-t raw"

-------------------------------

that being missing explains why everything fell to pieces and why adding
things which were supposed to be gone already failed

$IPTABLES here is a macro within my application

 for TABLE in $(<'/proc/net/ip_tables_names'); do
  hlp_rule_ipv4 "$IPTABLES -t $TABLE -F"
  hlp_rule_ipv4 "$IPTABLES -t $TABLE -X"
 done
 if [ "$IPV6_LOADED" == 1 ]; then
  for TABLE in $(<'/proc/net/ip6_tables_names'); do
   hlp_rule_ipv6 "$IPTABLES -t $TABLE -F"
   hlp_rule_ipv6 "$IPTABLES -t $TABLE -X"
  done
 fi


* Re: iptables user space performance benchmarks published
  2020-06-22 16:23     ` Stefano Brivio
@ 2020-06-22 16:38       ` Reindl Harald
  0 siblings, 0 replies; 16+ messages in thread
From: Reindl Harald @ 2020-06-22 16:38 UTC (permalink / raw)
  To: Stefano Brivio, Phil Sutter
  Cc: Pablo Neira Ayuso, netfilter-devel, Jozsef Kadlecsik



On 22.06.20 at 18:23, Stefano Brivio wrote:
> By the way, now nftables should natively support all the features from
> ipset.
> 
> My plan (for which I haven't found the time in months) would be to
> write some kind of "reference" wrapper to create nftables sets from
> ipset commands, and to render them back as ipset-style output.
> 
> I wonder if this should become the job of iptables-nft, eventually

no, thanks

way too much work went into getting an admin backend (calling nano and
friends) to maintain that stuff and into supporting a wild mix of IPv4 and
IPv6 which is assigned to the correct ipset

that's a whole, very fancy toolkit, and once the issues from my last mail
are ruled out, probably everything works transparently with iptables-legacy
and iptables-nft alike while keeping ipset as it is, a separate layer

it's not only about assigning, loading and saving but also about finding,
listing and counting things - if someone wants it native when starting
from scratch I understand why, but I don't want to in this lifetime :-)


* Re: iptables user space performance benchmarks published
  2020-06-22 16:29               ` Reindl Harald
@ 2020-06-22 16:45                 ` Phil Sutter
  2020-06-22 16:59                   ` Reindl Harald
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Sutter @ 2020-06-22 16:45 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Pablo Neira Ayuso, netfilter-devel

Hi Harald,

On Mon, Jun 22, 2020 at 06:29:05PM +0200, Reindl Harald wrote:
> On 22.06.20 at 17:44, Phil Sutter wrote:
> > Sorry, no thanks. If your setup is so complicated that you'd rather send
> > me an image of the machine(s?) running it, you're in dire need of
> > simplifying things before I can help out. Assuming that
> > 'firewall.sh' is also really 57KB in size, I'll probably have a hard
> > time even making it do what it's supposed to, let alone reproduce the
> > problem.
> 
> yeah, it's a corporate firewall with DoS protection, portscan triggers
> and a ton of fancy stuff, ending up in 270 rules which are 100% needed
> (most are chains that log something with -m limit and then do something
> using nflog/ulogd)
> 
> > Let's go another route: Before and after switching from legacy to nft
> > backend, please collect the current ruleset by recording the output of:
> > 
> > - iptables-save
> > - ip6tables-save
> > - nft list ruleset
> > - ipset list
> 
> *good news* with xtables-save v1.8.3 on Fedora 31
> 
> unlike at the last try, after switching to ip(6)tables-nft(-restore) and
> rebooting, the network now seems to work properly
>
> not only does ssh behind an ipset rule now work, my "test.php" also
> confirms that the rate limits, the portscan trigger and the NAT are working
>
> the iptables-legacy layer is definitely empty after the reboot
> 
> -------------------------------
> 
> but what is the replacement for iterating over "/proc/net/ip_tables_names"
> and "/proc/net/ip6_tables_names" in case "iptables-nft" is in use?
>
> that is not only used for the reset but also in several places for status
> counters and for displaying rulesets in "-t filter", "-t mangle" and "-t raw"
> 
> -------------------------------
> 
> that being missing explains why everything fell to pieces and why adding
> things which were supposed to be gone already failed

Ah yes, that's an obvious change and there's nothing we can do about it.
Unlike legacy iptables, there are no dedicated modules backing each
table in iptables-nft. For instance, nft_chain_filter.ko suffices for
the raw, filter and security tables. For the nat table you need
nft_chain_nat.ko, and mangle needs nft_chain_route.ko (actually just for
the OUTPUT chain).
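
(For the record, "lsmod | grep nft_chain" shows which of these are
currently loaded.)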

> $IPTABLES here is a macro within my application
> 
>  for TABLE in $(<'/proc/net/ip_tables_names'); do
>   hlp_rule_ipv4 "$IPTABLES -t $TABLE -F"
>   hlp_rule_ipv4 "$IPTABLES -t $TABLE -X"
>  done
>  if [ "$IPV6_LOADED" == 1 ]; then
>   for TABLE in $(<'/proc/net/ip6_tables_names'); do
>    hlp_rule_ipv6 "$IPTABLES -t $TABLE -F"
>    hlp_rule_ipv6 "$IPTABLES -t $TABLE -X"
>   done
>  fi

For iptables-services in Fedora, I simply hard-coded the table names.
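
In your script that would boil down to something like this (just a sketch,
not the exact Fedora code):

 for TABLE in raw mangle nat filter security; do
  hlp_rule_ipv4 "$IPTABLES -t $TABLE -F"
  hlp_rule_ipv4 "$IPTABLES -t $TABLE -X"
 done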

Cheers, Phil


* Re: iptables user space performance benchmarks published
  2020-06-22 16:45                 ` Phil Sutter
@ 2020-06-22 16:59                   ` Reindl Harald
  0 siblings, 0 replies; 16+ messages in thread
From: Reindl Harald @ 2020-06-22 16:59 UTC (permalink / raw)
  To: Phil Sutter, Pablo Neira Ayuso, netfilter-devel

Hi Phil,

On 22.06.20 at 18:45, Phil Sutter wrote:
>> but what is the replacement for iterating over "/proc/net/ip_tables_names"
>> and "/proc/net/ip6_tables_names" in case "iptables-nft" is in use?
>>
>> that is not only used for the reset but also in several places for status
>> counters and for displaying rulesets in "-t filter", "-t mangle" and "-t raw"
>>
>> -------------------------------
>>
>> that being missing explains why everything fell to pieces and why adding
>> things which were supposed to be gone already failed
> 
> Ah yes, that's an obvious change and there's nothing we can do about it.
> Unlike legacy iptables, there are no dedicated modules backing each
> table in iptables-nft. For instance, nft_chain_filter.ko suffices for
> the raw, filter and security tables. For the nat table you need
> nft_chain_nat.ko, and mangle needs nft_chain_route.ko (actually just for
> the OUTPUT chain).
> 
>> $IPTABLES here is a macro within my application
>>
>>  for TABLE in $(<'/proc/net/ip_tables_names'); do
>>   hlp_rule_ipv4 "$IPTABLES -t $TABLE -F"
>>   hlp_rule_ipv4 "$IPTABLES -t $TABLE -X"
>>  done
>>  if [ "$IPV6_LOADED" == 1 ]; then
>>   for TABLE in $(<'/proc/net/ip6_tables_names'); do
>>    hlp_rule_ipv6 "$IPTABLES -t $TABLE -F"
>>    hlp_rule_ipv6 "$IPTABLES -t $TABLE -X"
>>   done
>>  fi
> 
> For iptables-services in Fedora, I simply hard-coded the table names

that's exactly what I want to avoid, because in the case of iptables-legacy
that would load stuff which is not needed

given that "iptables-nft -t raw", "iptables-nft -t mangle" and
"iptables-nft -t nat" work as expected as far as I can see, some way to do
this with "iptables-nft" itself would be cool

---------------------

[root@firewall:/proc/net]$ iptables-nft -t natx -L
iptables v1.8.3 (nf_tables): table 'natx' does not exist
Perhaps iptables or your kernel needs to be upgraded.

well, I could write a loop testing that and provide an abstraction layer
for the case where the whole beast runs in iptables-nft mode, but that's
ugly as hell
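
(for the record, such an abstraction would probably look something like
this, untested and assuming only iptables-nft-created tables exist in the
ip family:

 for TABLE in $(nft list tables ip | awk '{print $3}'); do
  hlp_rule_ipv4 "$IPTABLES -t $TABLE -F"
  hlp_rule_ipv4 "$IPTABLES -t $TABLE -X"
 done

which of course adds a dependency on the nft binary itself)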

