* Extending an IPv4 filter to IPv6
@ 2023-08-18 10:56 Alessandro Vesely
  2023-08-19  1:46 ` Duncan Roe
  2023-08-20 21:41 ` Pablo Neira Ayuso
  0 siblings, 2 replies; 16+ messages in thread
From: Alessandro Vesely @ 2023-08-18 10:56 UTC (permalink / raw)
  To: netfilter

Hi all,

I have an old program (ipqbdb) which filters IPv4 packets using 
libnetfilter_queue.  I want to extend it to also filter IPv6, now that at last 
I can use some of those addresses.

The program obtains a handle by nfq_open(), and then (after unbind) binds by 
nfq_bind_pf(h, AF_INET).  Afterwards it creates the configured number of queues 
and filters the packets it finds there.
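
In outline, the relevant calls are these (a minimal sketch, error handling
omitted; the callback body is only a placeholder):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <linux/netfilter.h>            /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
        uint32_t id = ntohl(nfq_get_msg_packet_hdr(nfa)->packet_id);
        /* look at the addresses here, then give a verdict */
        return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

int main(void)
{
        struct nfq_handle *h = nfq_open();

        nfq_unbind_pf(h, AF_INET);      /* clear any stale binding */
        nfq_bind_pf(h, AF_INET);        /* the call discussed below */

        struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
        nfq_set_mode(qh, NFQNL_COPY_PACKET, 20);
        /* ... recv() loop feeding nfq_handle_packet() ... */
        return 0;
}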

There is a big DEPRECATED in the documentation, and the generated doc for 
nfq_bind_pf() parameters says "This call is obsolete, Linux kernels from 3.8 
onwards ignore it" (which is obviously false).
https://netfilter.org/projects/libnetfilter_queue/doxygen/

So, the first question: Can I keep using these functions?  What is the alternative?

Second question: Is there a "mixed mode" parameter, besides PF_INET and 
PF_INET6, that allows to capture both types?  In that case, can a queue receive 
either packet?


Any other suggestions about extending to IPv6 would be appreciated.


Thank you
Ale
-- 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-18 10:56 Extending an IPv4 filter to IPv6 Alessandro Vesely
@ 2023-08-19  1:46 ` Duncan Roe
  2023-08-19  9:53   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
  2023-08-20 21:39   ` Pablo Neira Ayuso
  2023-08-20 21:41 ` Pablo Neira Ayuso
  1 sibling, 2 replies; 16+ messages in thread
From: Duncan Roe @ 2023-08-19  1:46 UTC (permalink / raw)
  To: netfilter

Hi Ale,

On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
> Hi all,
>
> I have an old program (ipqbdb) which filters IPv4 packets using
> libnetfilter_queue.  I want to extend it to also filter IPv6, now that at
> last I can use some of those addresses.
>
> The program obtains a handle by nfq_open(), and then (after unbind) binds by
> nfq_bind_pf(h, AF_INET).  Afterwards it creates the configured number of
> queues and filters the packets it finds there.
>
> There is a big DEPRECATED in the documentation, and the generated doc for
> nfq_bind_pf() parameters says "This call is obsolete, Linux kernels from 3.8
> onwards ignore it" (which is obviously false).
> https://netfilter.org/projects/libnetfilter_queue/doxygen/
>
> So, the first question: Can I keep using these functions?  What is the alternative?
>
> Second question: Is there a "mixed mode" parameter, besides PF_INET and
> PF_INET6, that allows to capture both types?  In that case, can a queue
> receive either packet?
>
>
> Any other suggestions about extending to IPv6 would be appreciated.
>
>
> Thank you
> Ale
> --
>
There are 2 separate APIs in libnetfilter_queue, exemplified by
utils/nfqnl_test.c (your program) and examples/nf-queue.c (newer, has functions
for packet mangling).

DEPRECATED was an unfortunate choice of label for the older API: the functions
are not deprecated but the underlying library that they currently use is
deprecated. In answer to your questions:

1a Can I keep using these functions?: Certainly.

1b What is the alternative?: No need to change if your current program does all
you need. I assume here that you don't access the IPv4 header fields: the new
API has functions for that (and IPv6) but the old API has nothing of that
nature.

2 ...can a queue receive either packet?: Yes. utils/nfqnl_test.c works fine
with IPv6. nfq_bind_pf() really *is* obsolete - I'll explain:

In libnetfilter_queue:
  In libnetfilter_queue.c:
    493 int nfq_bind_pf(struct nfq_handle *h, uint16_t pf)
    494 {
    495         return __build_send_cfg_msg(h, NFQNL_CFG_CMD_PF_BIND, 0, pf);
    496 }

In Linux kernel:
  In net/netfilter/nfnetlink_queue.c
    1380     case NFQNL_CFG_CMD_PF_BIND:
    1381     case NFQNL_CFG_CMD_PF_UNBIND:
    1382       break;
    1383     default:
    1384       ret = -ENOTSUPP;
    1385       goto err_out_unlock;

Cheers ... Duncan.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-19  1:46 ` Duncan Roe
@ 2023-08-19  9:53   ` Alessandro Vesely
  2023-08-20  1:09     ` Duncan Roe
  2023-08-20 21:39   ` Pablo Neira Ayuso
  1 sibling, 1 reply; 16+ messages in thread
From: Alessandro Vesely @ 2023-08-19  9:53 UTC (permalink / raw)
  To: netfilter

Hi Duncan, thank you for your reply.

On Sat 19/Aug/2023 03:46:06 +0200 Duncan Roe wrote:
> On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
>>
>> I have an old program (ipqbdb) which filters IPv4 packets using
>> libnetfilter_queue.  I want to extend it to also filter IPv6, now that at
>> last I can use some of those addresses. [...]
>>
>> There is a big DEPRECATED in the documentation, and the generated doc for
>> nfq_bind_pf() parameters says "This call is obsolete, Linux kernels from 3.8
>> onwards ignore it" (which is obviously false).
>> https://netfilter.org/projects/libnetfilter_queue/doxygen/
>>
>> So, the first question: Can I keep using these functions?  What is the alternative?
>>
>> Second question: Is there a "mixed mode" parameter, besides PF_INET and
>> PF_INET6, that allows to capture both types?  In that case, can a queue
>> receive either packet?
>
> There are 2 separate APIs in libnetfilter_queue, exemplified by
> utils/nfqnl_test.c (your program) and examples/nf-queue.c (newer, has functions
> for packet mangling).


Only the latter is included in the Debian package.


> DEPRECATED was an unfortunate choice of label for the older API: the functions
> are not deprecated but the underlying library that they currently use is
> deprecated. In answer to your questions:


In fact it still works.


> 1a Can I keep using these functions?: Certainly.
> 
> 1b What is the alternative?: No need to change if your current program does all
> you need. I assume here that you don't access the IPv4 header fields: the new
> API has functions for that (and IPv6) but the old API has nothing of that
> nature.


I have nfq_set_mode(qh, NFQNL_COPY_PACKET, 20), where a copy_range of 20 is 
enough to see the IP addresses, which is what it filters.

I think I'll go for the alternative anyway.


> 2 ...can a queue receive either packet?: Yes. utils/nfqnl_test.c works fine
> with IPv6. nfq_bind_pf() really *is* obsolete - I'll explain:
> 
> In libnetfilter_queue:
>    In libnetfilter_queue.c:
>      493 int nfq_bind_pf(struct nfq_handle *h, uint16_t pf)
>      494 {
>      495         return __build_send_cfg_msg(h, NFQNL_CFG_CMD_PF_BIND, 0, pf);
>      496 }
> 
> In Linux kernel:
>    In net/netfilter/nfnetlink_queue.c
>      1380     case NFQNL_CFG_CMD_PF_BIND:
>      1381     case NFQNL_CFG_CMD_PF_UNBIND:
>      1382       break;
>      1383     default:
>      1384       ret = -ENOTSUPP;
>      1385       goto err_out_unlock;


Heck, I see.  In particular, the cmd->pf argument is never used.  That means 
that the type of packet a filter receives only depends on what iptables or nft 
are shoving at its queue, irrespective of compile and runtime config.  Correct?


Best
Ale
-- 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-19  9:53   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
@ 2023-08-20  1:09     ` Duncan Roe
  0 siblings, 0 replies; 16+ messages in thread
From: Duncan Roe @ 2023-08-20  1:09 UTC (permalink / raw)
  To: netfilter

On Sat, Aug 19, 2023 at 11:53:19AM +0200, Alessandro Vesely wrote:
> Hi Duncan, thank you for your reply.
>
[...]
>
> > 2 ...can a queue receive either packet?: Yes. utils/nfqnl_test.c works fine
> > with IPv6. nfq_bind_pf() really *is* obsolete - I'll explain:
> >
> > In libnetfilter_queue:
> >    In libnetfilter_queue.c:
> >      493 int nfq_bind_pf(struct nfq_handle *h, uint16_t pf)
> >      494 {
> >      495         return __build_send_cfg_msg(h, NFQNL_CFG_CMD_PF_BIND, 0, pf);
> >      496 }
> >
> > In Linux kernel:
> >    In net/netfilter/nfnetlink_queue.c
> >      1380     case NFQNL_CFG_CMD_PF_BIND:
> >      1381     case NFQNL_CFG_CMD_PF_UNBIND:
> >      1382       break;
> >      1383     default:
> >      1384       ret = -ENOTSUPP;
> >      1385       goto err_out_unlock;
>
>
> Heck, I see.  In particular, the cmd->pf argument is never used.  That means
> that the type of packet a filter receives only depends on what iptables or
> nft are shoving at its queue, irrespective of compile and runtime config.
> Correct?
>
Yes, correct.
>
> Best
> Ale
> --
>
Cheers ... Duncan.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-19  1:46 ` Duncan Roe
  2023-08-19  9:53   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
@ 2023-08-20 21:39   ` Pablo Neira Ayuso
  1 sibling, 0 replies; 16+ messages in thread
From: Pablo Neira Ayuso @ 2023-08-20 21:39 UTC (permalink / raw)
  To: netfilter

On Sat, Aug 19, 2023 at 11:46:06AM +1000, Duncan Roe wrote:
> On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
[...]
> There are 2 separate APIs in libnetfilter_queue, exemplified by
> utils/nfqnl_test.c (your program) and examples/nf-queue.c (newer, has functions
> for packet mangling).
> 
> DEPRECATED was an unfortunate choice of label for the older API: the functions
> are not deprecated but the underlying library that they currently use is
> deprecated. In answer to your questions:
> 
> 1a Can I keep using these functions?: Certainly.

No. For new applications, the new libmnl-based API is the way to go.

The old API depends on libnfnetlink, and it will go away sooner or later if
there is no way to make it work behind the scenes with libmnl.

We are steadily removing all users of libnfnetlink in favour of libmnl.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-18 10:56 Extending an IPv4 filter to IPv6 Alessandro Vesely
  2023-08-19  1:46 ` Duncan Roe
@ 2023-08-20 21:41 ` Pablo Neira Ayuso
  2023-08-21 17:18   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
  1 sibling, 1 reply; 16+ messages in thread
From: Pablo Neira Ayuso @ 2023-08-20 21:41 UTC (permalink / raw)
  To: Alessandro Vesely; +Cc: netfilter

On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
> Hi all,
> 
> I have an old program (ipqbdb) which filters IPv4 packets using
> libnetfilter_queue.  I want to extend it to also filter IPv6, now that at
> last I can use some of those addresses.
> 
> The program obtains a handle by nfq_open(), and then (after unbind) binds by
> nfq_bind_pf(h, AF_INET).  Afterwards it creates the configured number of
> queues and filters the packets it finds there.
> 
> There is a big DEPRECATED in the documentation, and the generated doc for
> nfq_bind_pf() parameters says "This call is obsolete, Linux kernels from 3.8
> onwards ignore it" (which is obviously false).
> https://netfilter.org/projects/libnetfilter_queue/doxygen/
> 
> So, the first question: Can I keep using these functions?  What is the alternative?

The alternative is the libmnl-based API which is the way to go for new
applications.

> Second question: Is there a "mixed mode" parameter, besides PF_INET and
> PF_INET6, that allows to capture both types?  In that case, can a queue
> receive either packet?

Using the 'inet' family in nftables, it should be possible to send
both IPv4 and IPv6 packets to one single queue in userspace.
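
For example, rules along these lines should steer both families to one
queue (queue number 0 is only illustrative):

  nft add table inet filter
  nft add chain inet filter input '{ type filter hook input priority 0; }'
  nft add rule inet filter input queue num 0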

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-20 21:41 ` Pablo Neira Ayuso
@ 2023-08-21 17:18   ` Alessandro Vesely
  2023-08-21 19:10     ` Pablo Neira Ayuso
  0 siblings, 1 reply; 16+ messages in thread
From: Alessandro Vesely @ 2023-08-21 17:18 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter, netfilter

On Sun 20/Aug/2023 23:41:43 +0200 Pablo Neira Ayuso wrote:
> On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
>> [...]
>>
>> So, the first question: Can I keep using these functions?  What is the alternative?
>
> The alternative is the libmnl-based API which is the way to go for new 
> applications.


The nf-queue.c[*] example that illustrates libmnl is strange.  It shows a
function nfq_nlmsg_put() (libnetfilter_queue).  I have two questions about it:

1) In the example it is called twice, the second time after setting attrs. 
What purpose does the first call serve?

2) Is it fine to use a small buffer?  My filter only looks at addresses, so it 
should be enough to copy 40 bytes.  Can it be on stack?


>> Second question: Is there a "mixed mode" parameter, besides PF_INET and 
>> PF_INET6, that allows to capture both types?  In that case, can a queue 
>> receive either packet?
>
> Using the 'inet' family in nftables, it should be possible to send 
> both IPv4 and IPv6 packets to one single queue in userspace.


Yes, or two calls to iptables and ip6tables.  However, nfq_nlmsg_cfg_put_cmd() 
takes a pf argument, AF_INET in the example.  Is that argument used at all?


Thanks
Ale
-- 
[*] https://git.netfilter.org/libnetfilter_queue/tree/examples/nf-queue.c

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-21 17:18   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
@ 2023-08-21 19:10     ` Pablo Neira Ayuso
  2023-08-22 18:09       ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
  0 siblings, 1 reply; 16+ messages in thread
From: Pablo Neira Ayuso @ 2023-08-21 19:10 UTC (permalink / raw)
  To: Alessandro Vesely; +Cc: netfilter

On Mon, Aug 21, 2023 at 07:18:46PM +0200, Alessandro Vesely wrote:
> On Sun 20/Aug/2023 23:41:43 +0200 Pablo Neira Ayuso wrote:
> > On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
> > > [...]
> > >
> > > So, the first question: Can I keep using these functions?  What is the alternative?
> >
> > The alternative is the libmnl-based API which is the way to go for new
> > applications.
>
>
> The nf-queue.c[*] example that illustrates libmnl is strange.  It shows a
> function nfq_nlmsg_put() (libnetfilter_queue).

Yes, that is a helper function that is provided by libnetfilter_queue
to assist a libmnl-based program in building the netlink headers:

EXPORT_SYMBOL
struct nlmsghdr *nfq_nlmsg_put(char *buf, int type, uint32_t queue_num)
{
        struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf);
        nlh->nlmsg_type = (NFNL_SUBSYS_QUEUE << 8) | type;
        nlh->nlmsg_flags = NLM_F_REQUEST;

        struct nfgenmsg *nfg = mnl_nlmsg_put_extra_header(nlh, sizeof(*nfg));
        nfg->nfgen_family = AF_UNSPEC;
        nfg->version = NFNETLINK_V0;
        nfg->res_id = htons(queue_num);

        return nlh;
}

This sets up two headers: first the netlink header, which tells the
subsystem and the type of request, then the nfgenmsg header,
which is specific to the nfnetlink_queue subsystem. It stores the
queue number; family and version are set to unspec and version_0
respectively.

These helper functions are offered in libnetfilter_queue; it is up to
you to opt in to using them or not.

> I have two questions about it:
>
> 1) In the example it is called twice, the second time after setting attrs.
> What purpose does the first call serve?

There are two sections in the nf-queue example:

Section #1 (main function)
  Set up and configure the pipeline between kernel and
  userspace.  This creates the netlink socket and you send the
  configuration to the kernel for this pipeline.

Section #2 (packet processing loop)
   This is an infinite loop where your software reads for packets
   to come from the kernel and it calls a callback to handle the
   netlink message that encapsulates the packet and its metadata.

You have full control over the socket, so you can instantiate a non-blocking
socket and use select()/poll() if your software handles more than one
socket for I/O multiplexing. This example uses a blocking socket.
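
If you go non-blocking, a sketch with poll() could look like this
(assuming the socket was made non-blocking with fcntl(), and nl, buf,
buf_size, portid and queue_cb as in the example; <poll.h> and <errno.h>
are needed):

        struct pollfd pfd = {
                .fd     = mnl_socket_get_fd(nl),
                .events = POLLIN,
        };

        while (poll(&pfd, 1, -1) > 0) {
                ssize_t n = mnl_socket_recvfrom(nl, buf, buf_size);
                if (n < 0) {
                        if (errno == EAGAIN)
                                continue;       /* spurious wakeup */
                        break;
                }
                /* dispatch every message in the buffer to the callback */
                if (mnl_cb_run(buf, n, 0, portid, queue_cb, NULL) < 0)
                        break;
        }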

> 2) Is it fine to use a small buffer?  My filter only looks at addresses, so
> it should be enough to copy 40 bytes.  Can it be on stack?

You can specify NFQNL_COPY_PACKET in your configuration to tell the
kernel to send you 40 bytes only, when setting up the pipeline. The
kernel sends you a netlink message that contains attributes to
encapsulate packet metadata and the actual payload. The payload
comes as an attribute of the netlink message.
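
Sketched with the names used in the nf-queue example (40 bytes is just
enough for the fixed IPv6 header; the example itself uses a larger range):

        nlh = nfq_nlmsg_put(buf, NFQNL_MSG_CONFIG, queue_num);
        nfq_nlmsg_cfg_put_params(nlh, NFQNL_COPY_PACKET, 40);
        mnl_socket_sendto(nl, nlh, nlh->nlmsg_len);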

You can fetch the payload directly from the attribute:

        data = mnl_attr_get_payload(attr[NFQA_PAYLOAD]);

This is accessing the data that is stored in the on-stack buffer that
holds the netlink message that your software has received.

You can obtain the packet payload length via:

        len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);

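To make that concrete for an address-only filter, the callback could
fetch the addresses like this (a sketch, not code from the example;
struct iphdr comes from <linux/ip.h>, struct ip6_hdr from <netinet/ip6.h>):

        struct nlattr *attr[NFQA_MAX + 1] = {};

        if (nfq_nlmsg_parse(nlh, attr) < 0 || !attr[NFQA_PAYLOAD])
                return MNL_CB_ERROR;

        uint8_t *p = mnl_attr_get_payload(attr[NFQA_PAYLOAD]);
        uint16_t plen = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);

        if (plen >= sizeof(struct iphdr) && (p[0] >> 4) == 4) {
                const struct iphdr *ip4 = (const struct iphdr *)p;
                /* filter on ip4->saddr / ip4->daddr */
        } else if (plen >= sizeof(struct ip6_hdr) && (p[0] >> 4) == 6) {
                const struct ip6_hdr *ip6 = (const struct ip6_hdr *)p;
                /* filter on ip6->ip6_src / ip6->ip6_dst */
        }
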
> > > Second question: Is there a "mixed mode" parameter, besides PF_INET
> > > and PF_INET6, that allows to capture both types?  In that case, can
> > > a queue receive either packet?
> >
> > Using the 'inet' family in nftables, it should be possible to send both
> > IPv4 and IPv6 packets to one single queue in userspace.
>
> Yes, or two calls to iptables and ip6tables.

Exactly.

> However, nfq_nlmsg_cfg_put_cmd() takes a pf argument, AF_INET in the
> example.  Is that argument used at all?

This is a legacy parameter which is not used by the kernel anymore;
you can specify AF_UNSPEC there.
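
So the bind in the example could just as well read (a sketch with the
example's variable names):

        nlh = nfq_nlmsg_put(buf, NFQNL_MSG_CONFIG, queue_num);
        nfq_nlmsg_cfg_put_cmd(nlh, AF_UNSPEC, NFQNL_CFG_CMD_BIND);
        mnl_socket_sendto(nl, nlh, nlh->nlmsg_len);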

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-21 19:10     ` Pablo Neira Ayuso
@ 2023-08-22 18:09       ` Alessandro Vesely
  2023-08-27  8:34         ` Duncan Roe
  2023-08-27 20:48         ` Pablo Neira Ayuso
  0 siblings, 2 replies; 16+ messages in thread
From: Alessandro Vesely @ 2023-08-22 18:09 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter, netfilter

On Mon 21/Aug/2023 21:10:35 +0200 Pablo Neira Ayuso wrote:
> On Mon, Aug 21, 2023 at 07:18:46PM +0200, Alessandro Vesely wrote:
>> On Sun 20/Aug/2023 23:41:43 +0200 Pablo Neira Ayuso wrote:
>>> On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
>>>> [...]
>>>>
>>>> So, the first question: Can I keep using these functions?  What is the alternative?
>>>
>>> The alternative is the libmnl-based API which is the way to go for new 
>>> applications.
>>
>>
>> The nf-queue.c[*] example that illustrates libmnl is strange.  It shows a
>> function nfq_nlmsg_put() (libnetfilter_queue).
>
> Yes, that is a helper function that is provided by libnetfilter_queue 
> to assist a libmnl-based program in building the netlink headers:
>
> EXPORT_SYMBOL
> struct nlmsghdr *nfq_nlmsg_put(char *buf, int type, uint32_t queue_num)
> {
>          [...]
> }
>
> This sets up two headers: first the netlink header, which tells the
> subsystem and the type of request, then the nfgenmsg header,
> which is specific to the nfnetlink_queue subsystem. It stores the
> queue number; family and version are set to unspec and version_0
> respectively.
>
> These helper functions are offered in libnetfilter_queue; it is up to
> you to opt in to using them or not.


I'm starting to understand.  The example reuses the big buffer to set up the 
queue parameters via mnl_socket_sendto(), received by nfqnl_recv_config().  The 
old API had specific calls instead, nfq_create_queue(), nfq_set_mode(),... It 
is this style which makes everything look more complicated, as it requires 
several calls.


>> I have two questions about it:
>>
>> 1) In the example it is called twice, the second time after setting attrs.
>> What purpose does the first call serve?
>
> There are two sections in the nf-queue example:
>
> Section #1 (main function)
>    Set up and configure the pipeline between kernel and
>    userspace.  This creates the netlink socket and you send the
>    configuration to the kernel for this pipeline.


Now I gather that the first call creates the queue and the second one sets a
flag.  It's not clear why that needs to be two calls.  Could it all have been
stuffed in a single buffer and delivered by a single call?  Hm... perhaps it
is split just to show that it doesn't have to be monolithic.  If I want to
set the queue maxlen, do I have to add a third call?


> Section #2 (packet processing loop)
>     This is an infinite loop where your software reads for packets
>     to come from the kernel and it calls a callback to handle the
>     netlink message that encapsulates the packet and its metadata.


In this section the example has a single call to mnl_socket_sendto() per packet.


> You have full control over the socket, so you can instantiate a non-blocking
> socket and use select()/poll() if your software handles more than one
> socket for I/O multiplexing. This example uses a blocking socket.


But I can handle multiple queues using the same socket, can't I?  Are there any
advantages to handling, say, a netlink socket for each queue?


>> 2) Is it fine to use a small buffer?  My filter only looks at addresses, so 
>> it should be enough to copy 40 bytes.  Can it be on stack?
>
> You can specify NFQNL_COPY_PACKET in your configuration to tell the 
> kernel to send you 40 bytes only, when setting up the pipeline. The 
> kernel sends you a netlink message that contains attributes to 
> encapsulate packet metadata and the actual payload. The payload
> comes as an attribute of the netlink message.


So the buffer must be bigger than just the payload.  libmnl.h defines a large 
MNL_SOCKET_BUFFER_SIZE...


> You can fetch the payload directly from the attribute:
> 
>          data = mnl_attr_get_payload(attr[NFQA_PAYLOAD]);


Yup, that's what the example does.


> This is accessing the data that is stored in the on-stack buffer that
> holds the netlink message that your software has received.


It seems a buffer can contain several packets.  Is that related to the queue
maxlen?


> You can obtain the packet payload length via:
> 
>          len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);


And this should be the length specified with NFQNL_COPY_PACKET (or less), correct?


Thanks
Ale
-- 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-22 18:09       ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
@ 2023-08-27  8:34         ` Duncan Roe
  2023-08-27 17:20           ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
  2023-08-27 20:49           ` Pablo Neira Ayuso
  2023-08-27 20:48         ` Pablo Neira Ayuso
  1 sibling, 2 replies; 16+ messages in thread
From: Duncan Roe @ 2023-08-27  8:34 UTC (permalink / raw)
  To: netfilter; +Cc: Pablo Neira Ayuso

On Tue, Aug 22, 2023 at 08:09:53PM +0200, Alessandro Vesely wrote:
> On Mon 21/Aug/2023 21:10:35 +0200 Pablo Neira Ayuso wrote:
> > On Mon, Aug 21, 2023 at 07:18:46PM +0200, Alessandro Vesely wrote:
> > > On Sun 20/Aug/2023 23:41:43 +0200 Pablo Neira Ayuso wrote:
> > > > On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
> > > > > [...]
> > > > >
> > > > > So, the first question: Can I keep using these functions?  What is the alternative?
> > > >
> > > > The alternative is the libmnl-based API which is the way to go
> > > > for new applications.
> > >
> > >
> > > The nf-queue.c[*] example that illustrates libmnl is strange.  It
> > > shows a function nfq_nlmsg_put() (libnetfilter_queue).
> >
> > Yes, that is a helper function that is provided by libnetfilter_queue to
> > assist a libmnl-based program in building the netlink headers:
> >
> > EXPORT_SYMBOL
> > struct nlmsghdr *nfq_nlmsg_put(char *buf, int type, uint32_t queue_num)
> > {
> >          [...]
> > }
> >
> > This sets up two headers: first the netlink header, which tells the
> > subsystem and the type of request, then the nfgenmsg header,
> > which is specific to the nfnetlink_queue subsystem. It stores the queue
> > number; family and version are set to unspec and version_0
> > respectively.
> >
> > These helper functions are offered in libnetfilter_queue; it is up to
> > you to opt in to using them or not.
>
>
> I'm starting to understand.  The example reuses the big buffer to set up the
> queue parameters via mnl_socket_sendto(), received by nfqnl_recv_config().
> The old API had specific calls instead, nfq_create_queue(),
> nfq_set_mode(),... It is this style which makes everything look more
> complicated, as it requires several calls.
>
>
> > > I have two questions about it:
> > >
> > > 1) In the example it is called twice, the second time after setting attrs.
> > > What purpose does the first call serve?
> >
> > There are two sections in the nf-queue example:
> >
> > Section #1 (main function)
> >    Set up and configure the pipeline between kernel and
> >    userspace.  This creates the netlink socket and you send the
> >    configuration to the kernel for this pipeline.
>
>
> Now I gather that the first call creates the queue and the second one sets a
> flag.  It's not clear why that needs to be two calls.  Could it all have been
> stuffed in a single buffer and delivered by a single call?  Hm... perhaps it
> is split just to show that it doesn't have to be monolithic.  If I want to
> set the queue maxlen, do I have to add a third call?
>
>
> > Section #2 (packet processing loop)
> >     This is an infinite loop where your software reads for packets
> >     to come from the kernel and it calls a callback to handle the
> >     netlink message that encapsulates the packet and its metadata.
>
>
> In this section the example has a single call to mnl_socket_sendto() per packet.
>
>
> > You have full control over the socket, so you can instantiate a non-blocking
> > socket and use select()/poll() if your software handles more than one
> > socket for I/O multiplexing. This example uses a blocking socket.
>
>
> But I can handle multiple queues using the same socket, can't I?  Are there
> any advantages to handling, say, a netlink socket for each queue?
>
>
> > > 2) Is it fine to use a small buffer?  My filter only looks at
> > > addresses, so it should be enough to copy 40 bytes.  Can it be on
> > > stack?
> >
> > You can specify NFQNL_COPY_PACKET in your configuration to tell the
> > kernel to send you 40 bytes only, when setting up the pipeline. The
> > kernel sends you a netlink message that contains attributes to
> > encapsulate packet metadata and the actual payload. The payload comes
> > as an attribute of the netlink message.
>
>
> So the buffer must be bigger than just the payload.  libmnl.h defines a
> large MNL_SOCKET_BUFFER_SIZE...
>
>
> > You can fetch the payload directly from the attribute:
> >
> >          data = mnl_attr_get_payload(attr[NFQA_PAYLOAD]);
>
>
> Yup, that's what the example does.
>
>
> > This is accessing the data that is stored in the on-stack buffer that
> > holds the netlink message that your software has received.
>
>
> It seems a buffer can contain several packets.  Is that related to the
> queue maxlen?
>
man 7 netlink will tell you that netlink messages may be batched. This is
straightforward to observe in a libnetfilter_log program under gdb.

However libnetfilter_queue programs never get batched netlink messages. So the
callback isn't strictly necessary but it would mean extra code to special-case
libnetfilter_queue (among all the other netfilter libraries) so it's been left
there.

If you rely on this behaviour it might be prudent to check that bytes read ==
((struct nlmsghdr *)buf)->nlmsg_len.
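
Something like this (a sketch; nl, buf and buf_size as in the nf-queue
example, <stdio.h> for fprintf()):

        ssize_t n = mnl_socket_recvfrom(nl, buf, buf_size);
        struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

        if (n > 0 && (size_t)n != nlh->nlmsg_len)
                fprintf(stderr, "more than one message in the buffer\n");
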
>
> > You can obtain the packet payload length via:
> >
> >          len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);
>
>
> And this should be the length specified with NFQNL_COPY_PACKET (or less), correct?
>
You can check for packet truncation by checking `len` above against what you
actually received.
>
> Thanks
> Ale
> --
>
Cheers ... Duncan.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-27  8:34         ` Duncan Roe
@ 2023-08-27 17:20           ` Alessandro Vesely
  2023-08-27 18:58             ` Duncan Roe
  2023-08-27 20:49           ` Pablo Neira Ayuso
  1 sibling, 1 reply; 16+ messages in thread
From: Alessandro Vesely @ 2023-08-27 17:20 UTC (permalink / raw)
  To: netfilter

On Sun 27/Aug/2023 10:34:09 +0200 Duncan Roe wrote:
>> It seems a buffer can contain several packets.  Is that related to the
>> queue maxlen?
>>
> man 7 netlink will tell you that netlink messages may be batched.


Thanks for the pointer, I hadn't noticed it.


> This is straightforward to observe in a libnetfilter_log program under gdb.
> However libnetfilter_queue programs never get batched netlink messages. So the
> callback isn't strictly necessary but it would mean extra code to special-case
> libnetfilter_queue (among all the other netfilter libraries) so it's been left
> there.
> 
> If you rely on this behaviour it might be prudent to check that bytes read ==
> ((struct nlmsghdr *)buf)->nlmsg_len.
>
>>> You can obtain the packet payload length via:
>>>
>>>           len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);
>>
>> And this should be the length specified with NFQNL_COPY_PACKET (or less), correct?
>>
> You can check for packet truncation by checking `len` above against what you
> actually received.


I'll try.  However, I'd never know if my test conditions equal what can happen 
at runtime.  As I only look at addresses, it's fine to truncate packets at that 
length.

I just want to minimize memory footprint, but without hampering performance.


Thanks
Ale
-- 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-27 17:20           ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
@ 2023-08-27 18:58             ` Duncan Roe
  2023-08-27 21:12               ` Pablo Neira Ayuso
  0 siblings, 1 reply; 16+ messages in thread
From: Duncan Roe @ 2023-08-27 18:58 UTC (permalink / raw)
  To: netfilter

On Sun, Aug 27, 2023 at 07:20:45PM +0200, Alessandro Vesely wrote:
> On Sun 27/Aug/2023 10:34:09 +0200 Duncan Roe wrote:
> > > It seems a buffer can contain several packets.  Is that related to the
> > > queue maxlen?
> > >
> > man 7 netlink will tell you that netlink messages may be batched.
>
>
> Thanks for the pointer, I hadn't noticed it.
>
>
> > This is straightforward to observe in a libnetfilter_log program under gdb.
> > However libnetfilter_queue programs never get batched netlink messages. So the
> > callback isn't strictly necessary but it would mean extra code to special-case
> > libnetfilter_queue (among all the other netfilter libraries) so it's been left
> > there.
> >
> > If you rely on this behaviour it might be prudent to check that bytes read ==
> > ((struct nlmsghdr *)buf)->nlmsg_len.
> >
> > > > You can obtain the packet payload length via:
> > > >
> > > >           len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);
> > >
> > > And this should be the length specified with NFQNL_COPY_PACKET (or less), correct?
> > >
> > You can check for packet truncation by checking `len` above against what you
> > actually received.
>
>
> I'll try.  However, I'd never know if my test conditions equal what can
> happen at runtime.  As I only look at addresses, it's fine to truncate
> packets at that length.
>
> I just want to minimize memory footprint, but without hampering performance.
>
You definitely want to use the new pktb_setup_raw() function then. git clone or
fork the repo at https://git.netfilter.org/libnetfilter_queue/

Cheers ... Duncan.
>
> Thanks
> Ale

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-22 18:09       ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
  2023-08-27  8:34         ` Duncan Roe
@ 2023-08-27 20:48         ` Pablo Neira Ayuso
  2023-08-31  9:22           ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
  1 sibling, 1 reply; 16+ messages in thread
From: Pablo Neira Ayuso @ 2023-08-27 20:48 UTC (permalink / raw)
  To: Alessandro Vesely; +Cc: netfilter

On Tue, Aug 22, 2023 at 08:09:53PM +0200, Alessandro Vesely wrote:
> On Mon 21/Aug/2023 21:10:35 +0200 Pablo Neira Ayuso wrote:
> > On Mon, Aug 21, 2023 at 07:18:46PM +0200, Alessandro Vesely wrote:
> > > On Sun 20/Aug/2023 23:41:43 +0200 Pablo Neira Ayuso wrote:
> > > > On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
> > > > > [...]
> > > > > 
> > > > > So, the first question: Can I keep using these functions?  What is the alternative?
> > > > 
> > > > The alternative is the libmnl-based API which is the way to go
> > > > for new applications.
> > > 
> > > 
> > > The nf-queue.c[*] example that illustrates libmnl is strange.  It
> > > shows a function nfq_nlmsg_put() (libnetfilter_queue).
> > 
> > Yes, that is a helper function that is provided by libnetfilter_queue to
> > assist a libmnl-based program in building the netlink headers:
> > 
> > EXPORT_SYMBOL
> > struct nlmsghdr *nfq_nlmsg_put(char *buf, int type, uint32_t queue_num)
> > {
> >          [...]
> > }
> > 
> > This sets up two headers: first the netlink header, which tells the
> > subsystem and the type of request, then the nfgenmsg header,
> > which is specific to the nfnetlink_queue subsystem. It stores the queue
> > number; family and version are set to unspec and version_0
> > respectively.
> > 
> > These helper functions are offered in libnetfilter_queue; it is up to
> > you to opt in to using them or not.
> 
> 
> I'm starting to understand.  The example reuses the big buffer to set up the
> queue parameters via mnl_socket_sendto(), received by nfqnl_recv_config().
> The old API had specific calls instead, nfq_create_queue(),
> nfq_set_mode(),... It is this style which makes everything look more
> complicated, as it requires several calls.

I agree it is more low level.

> > > I have two questions about it:
> > > 
> > > 1) In the example it is called twice, the second time after setting attrs.
> > > What purpose does the first call serve?
> > 
> > There are two sections in the nf-queue example:
> > 
> > Section #1 (main function)
> >    Set up and configure the pipeline between kernel and
> >    userspace.  This creates the netlink socket and you send the
> >    configuration to the kernel for this pipeline.
> 
> Now I gather that the first call creates the queue and the second one sets a
> flag.  It's not clear why that needs to be two calls.  Could it all have been
> stuffed in a single buffer and delivered by a single call?  Hm... perhaps it
> is split just to show that it doesn't have to be monolithic.  If I want to
> set the queue maxlen, do I have to add a third call?

It should be possible to batch three netlink messages in one single
buffer; there is a batch API in libmnl. You will have to assign
different sequence numbers to each message in the batch to identify
errors, because the kernel tells you which message has failed (including
the original sequence number) and the reason (expressed as errno).

Since this is only called once to set up the data pipeline between
kernel and userspace, I do not think the batching is worth the effort.
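
For reference, a plain (unbatched) third message for the queue length
could use the nfq_nlmsg_cfg_put_qmaxlen() helper, sketched with the same
names as before (4096 is an arbitrary length):

        nlh = nfq_nlmsg_put(buf, NFQNL_MSG_CONFIG, queue_num);
        nfq_nlmsg_cfg_put_qmaxlen(nlh, 4096);
        mnl_socket_sendto(nl, nlh, nlh->nlmsg_len);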

> > Section #2 (packet processing loop)
> >     This is an infinite loop where your software reads for packets
> >     to come from the kernel and it calls a callback to handle the
> >     netlink message that encapsulates the packet and its metadata.
> 
> 
> In this section the example has a single call to mnl_socket_sendto() per packet.

This is to send a verdict back to kernel space on the packet that
userspace has received.

> > You have full control over the socket, so you can instantiate a non-blocking
> > socket and use select()/poll() if your software handles more than one
> > socket for I/O multiplexing. This example uses a blocking socket.
> 
> But I can handle multiple queues using the same socket, can't I?  Are there
> any advantages to handling, say, a netlink socket for each queue?

Yes, you can handle all queues with one single socket.

Splitting queues across sockets, combined with CPU pinning, allows you to
improve CPU utilization. There are a number of options to fan out
packets between several queues; see the documentation.
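
For instance, with the iptables NFQUEUE target (queue numbers are
illustrative):

        iptables -A INPUT -j NFQUEUE --queue-balance 0:3 --queue-cpu-fanout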

> > > 2) Is it fine to use a small buffer?  My filter only looks at
> > > addresses, so it should be enough to copy 40 bytes.  Can it be on
> > > stack?
> > 
> > You can specify NFQNL_COPY_PACKET in your configuration to tell the
> > kernel to send you 40 bytes only, when setting up the pipeline. The
> > kernel sends you a netlink message that contains attributes to
> > encapsulate packet metadata and the actual payload. The payload comes
> > as an attribute of the netlink message.
> 
> So the buffer must be bigger than just the payload.  libmnl.h defines a
> large MNL_SOCKET_BUFFER_SIZE...

That is a generic buffer definition. In the specific case of nfqueue,
you have to allocate a buffer to accommodate enough data to be received
from the kernel.

> > You can fetch the payload directly from the attribute:
> > 
> >          data = mnl_attr_get_payload(attr[NFQA_PAYLOAD]);
> 
> 
> Yup, that's what the example does.
> 
> > This is accessing the data that is stored in the on-stack buffer that
> > holds the netlink message that your software has received.
> 
> 
> It seems a buffer can contain several packets.  Is that related to the
> queue maxlen?

Linux provides GSO/GRO support; if it is turned on, as it is by
default, then you might be receiving a large packet whose size might be
larger than your device MTU. The nfqueue subsystem reports this via
the NFQA_CFG_F_GSO flag.
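
The nf-queue example requests it at configuration time along these lines:

        mnl_attr_put_u32(nlh, NFQA_CFG_FLAGS, htonl(NFQA_CFG_F_GSO));
        mnl_attr_put_u32(nlh, NFQA_CFG_MASK, htonl(NFQA_CFG_F_GSO));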

> > You can obtain the packet payload length via:
> > 
> >          len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);
> 
> 
> And this should be the length specified with NFQNL_COPY_PACKET (or less), correct?

Exactly.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-27  8:34         ` Duncan Roe
  2023-08-27 17:20           ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
@ 2023-08-27 20:49           ` Pablo Neira Ayuso
  1 sibling, 0 replies; 16+ messages in thread
From: Pablo Neira Ayuso @ 2023-08-27 20:49 UTC (permalink / raw)
  To: netfilter

On Sun, Aug 27, 2023 at 06:34:09PM +1000, Duncan Roe wrote:
> On Tue, Aug 22, 2023 at 08:09:53PM +0200, Alessandro Vesely wrote:
> > On Mon 21/Aug/2023 21:10:35 +0200 Pablo Neira Ayuso wrote:
> > > On Mon, Aug 21, 2023 at 07:18:46PM +0200, Alessandro Vesely wrote:
> > > > On Sun 20/Aug/2023 23:41:43 +0200 Pablo Neira Ayuso wrote:
> > > > > On Fri, Aug 18, 2023 at 12:56:38PM +0200, Alessandro Vesely wrote:
> > > > > > [...]
> > It seems a buffer can contain several packets.  Is that related to the
> > queue maxlen?
> >
> man 7 netlink will tell you that netlink messages may be batched. This is
> straightforward to observe in a libnetfilter_log program under gdb.
> 
> However libnetfilter_queue programs never get batched netlink messages. So the
> callback isn't strictly necessary but it would mean extra code to special-case
> libnetfilter_queue (among all the other netfilter libraries) so it's been left
> there.

The "several packets" in this case refers to Linux GRO/GSO.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-27 18:58             ` Duncan Roe
@ 2023-08-27 21:12               ` Pablo Neira Ayuso
  0 siblings, 0 replies; 16+ messages in thread
From: Pablo Neira Ayuso @ 2023-08-27 21:12 UTC (permalink / raw)
  To: netfilter

On Mon, Aug 28, 2023 at 04:58:09AM +1000, Duncan Roe wrote:
> On Sun, Aug 27, 2023 at 07:20:45PM +0200, Alessandro Vesely wrote:
> > On Sun 27/Aug/2023 10:34:09 +0200 Duncan Roe wrote:
> > > > It seems a buffer can contain several packets.  Is that related to the
> > > > queue maxlen?
> > > >
> > > man 7 netlink will tell you that netlink messages may be batched.
> >
> >
> > Thanks for the pointer, I hadn't noticed it.
> >
> >
> > > This is straightforward to observe in a libnetfilter_log program under gdb.
> > > However libnetfilter_queue programs never get batched netlink messages. So the
> > > callback isn't strictly necessary but it would mean extra code to special-case
> > > libnetfilter_queue (among all the other netfilter libraries) so it's been left
> > > there.
> > >
> > > If you rely on this behaviour it might be prudent to check that bytes read ==
> > > ((struct nlmsghdr *)buf)->nlmsg_len.
> > >
> > > > > You can obtain the packet payload length via:
> > > > >
> > > > >           len = mnl_attr_get_payload_len(attr[NFQA_PAYLOAD]);
> > > >
> > > > And this should be the length specified with NFQNL_COPY_PACKET (or less), correct?
> > > >
> > > You can check for packet truncation by checking `len` above against what you
> > > actually received.
> >
> >
> > I'll try.  However, I'd never know if my test conditions equal what can
> > happen at runtime.  As I only look at addresses, it's fine to truncate
> > packets at that length.
> >
> > I just want to minimize memory footprint, but without hampering performance.
>
> You definitely want to use the new pktb_setup_raw() function then. git clone or
> fork the repo at https://git.netfilter.org/libnetfilter_queue/

If Alessandro would like to use the pktbuff infrastructure, then yes.
Please note that the pktbuff infrastructure is entirely optional.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Extending an IPv4 filter to IPv6
  2023-08-27 20:48         ` Pablo Neira Ayuso
@ 2023-08-31  9:22           ` Alessandro Vesely
  0 siblings, 0 replies; 16+ messages in thread
From: Alessandro Vesely @ 2023-08-31  9:22 UTC (permalink / raw)
  To: Pablo Neira Ayuso; +Cc: netfilter, netfilter

Thanks for all the tips and the comments in the example.  I now seem to be able 
to code something...


Best
Ale


On Sun 27/Aug/2023 22:48:37 +0200 Pablo Neira Ayuso wrote:
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2023-08-18 10:56 Extending an IPv4 filter to IPv6 Alessandro Vesely
2023-08-19  1:46 ` Duncan Roe
2023-08-19  9:53   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
2023-08-20  1:09     ` Duncan Roe
2023-08-20 21:39   ` Pablo Neira Ayuso
2023-08-20 21:41 ` Pablo Neira Ayuso
2023-08-21 17:18   ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
2023-08-21 19:10     ` Pablo Neira Ayuso
2023-08-22 18:09       ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
2023-08-27  8:34         ` Duncan Roe
2023-08-27 17:20           ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
2023-08-27 18:58             ` Duncan Roe
2023-08-27 21:12               ` Pablo Neira Ayuso
2023-08-27 20:49           ` Pablo Neira Ayuso
2023-08-27 20:48         ` Pablo Neira Ayuso
2023-08-31  9:22           ` Extending an IPv4 filter to IPv6, " Alessandro Vesely
