* Re: netfilter question
       [not found] <cad49557-7c7a-83c9-d2b6-71d9624f0d52@miromedia.ca>
@ 2016-11-16 13:33 ` Eric Dumazet
  2016-11-16 15:02   ` Florian Westphal
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2016-11-16 13:33 UTC (permalink / raw)
  To: Eric Desrochers; +Cc: Florian Westphal, netfilter-devel

On Wed, Nov 16, 2016 at 2:22 AM, Eric Desrochers <ericd@miromedia.ca> wrote:
> Hi Eric,
>
> My name is Eric. I'm reaching out to you because I found your name in multiple netfilter kernel commits, and I was hoping we could discuss a potential regression.
>
> Using git bisect, I identified commit [https://github.com/torvalds/linux/commit/71ae0dff02d756e4d2ca710b79f2ff5390029a5f] as the one that introduced a serious performance slowdown when using the ip/ip6tables binaries with a large number of rules.
>
> I also tried with the latest and greatest v4.9-rc4 mainline kernel, and the slowdown is still present.
>
> Even commit [https://github.com/torvalds/linux/commit/a1a56aaa0735c7821e85c37268a7c12e132415cb], which introduces 16-byte alignment for xt_counters percpu allocations so that the byte and packet counters sit in the same cache line, has no impact either.
>
>
> Everything I found is detailed in the following bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1640786
>
> Of course, I'm fully aware that "iptables-restore" should be the preferred choice, as it is far more efficient (using iptables-restore does not exhibit the problem), but some folks still rely on ip/ip6tables and may face this performance slowdown.
>
> I only found the problem today and will continue investigating on my side, but I was wondering if we could discuss this subject.
>
> Thanks in advance.
>
> Regards,
>
> Eric
>

Hi Eric,

Thanks for your mail, but you should CC the netfilter-devel mailing list.

The key point is that we really care about the fast path: packet processing.
The cited commit helps this a lot by lowering the memory footprint on
hosts with many cores.
It is a step in the right direction.

Now we should probably batch the percpu allocations one page at a
time, or ask Tejun whether percpu allocations could be made really,
really fast (probably much harder).

But really, you should not add iptables rules one at a time...
That will never compete with iptables-restore. ;)

Florian, would you have time to work on a patch that groups the
percpu allocations one page at a time?
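[Editor's note: the page-batching idea suggested above can be modeled in plain userspace C. All names below are hypothetical, and C11 aligned_alloc() stands in for the kernel's __alloc_percpu(); this is a sketch of the technique, not the kernel code.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SZ    4096u
#define COUNTER_SZ 16u   /* sizeof(struct xt_counters): u64 packets + u64 bytes */

struct batch_state {
	char *mem;         /* current page; NULL when a fresh one is needed */
	unsigned int off;  /* next free offset inside that page */
};

/* Hand out one counter slot, grabbing a new page only once
 * every PAGE_SZ / COUNTER_SZ calls instead of on every call. */
void *counter_slot_alloc(struct batch_state *st)
{
	void *slot;

	if (st->mem == NULL) {
		st->mem = aligned_alloc(PAGE_SZ, PAGE_SZ); /* page-aligned page */
		if (st->mem == NULL)
			return NULL;
		st->off = 0;
	}
	slot = st->mem + st->off;
	st->off += COUNTER_SZ;
	if (st->off > PAGE_SZ - COUNTER_SZ) { /* page exhausted */
		st->mem = NULL;
		st->off = 0;
	}
	return slot;
}

/* Only the first slot carved from a page is page-aligned, so freeing
 * exactly the page-aligned slots releases every page once. */
void counter_slot_free(void *slot)
{
	if (((uintptr_t)slot & (PAGE_SZ - 1)) == 0)
		free(slot);
}
```

With a 4096-byte page and 16-byte counters, 256 consecutive rule entries share one allocation before a new page is taken.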

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: netfilter question
  2016-11-16 13:33 ` netfilter question Eric Dumazet
@ 2016-11-16 15:02   ` Florian Westphal
  2016-11-16 15:23     ` Eric Dumazet
  0 siblings, 1 reply; 28+ messages in thread
From: Florian Westphal @ 2016-11-16 15:02 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Eric Desrochers, Florian Westphal, netfilter-devel

Eric Dumazet <edumazet@google.com> wrote:

[..]

> The key point is that we really care about the fast path: packet processing.
> The cited commit helps this a lot by lowering the memory footprint on
> hosts with many cores.
> It is a step in the right direction.
> 
> Now we should probably batch the percpu allocations one page at a
> time, or ask Tejun whether percpu allocations could be made really,
> really fast (probably much harder).
> 
> But really, you should not add iptables rules one at a time...
> That will never compete with iptables-restore. ;)
> 
> Florian, would you have time to work on a patch that groups the
> percpu allocations one page at a time?

You mean something like this ? :
        xt_entry_foreach(iter, entry0, newinfo->size) {
-               ret = find_check_entry(iter, net, repl->name, repl->size);
-               if (ret != 0)
+               if (pcpu_alloc == 0) {
+                       pcnt = __alloc_percpu(PAGE_SIZE, sizeof(struct xt_counters));
+                       if (IS_ERR_VALUE(pcnt))
+                               BUG();
+               }
+
+               iter->counters.pcnt = pcnt + pcpu_alloc;
+               iter->counters.bcnt = !!pcpu_alloc;
+               pcpu_alloc += sizeof(struct xt_counters);
+
+               if (pcpu_alloc > PAGE_SIZE - sizeof(struct xt_counters))
+                       pcpu_alloc = 0;
+
+               ret = find_check_entry(iter, net, repl->name, repl->size)
 ...

This is going to be ugly, since we'll have to deal with SMP vs. !SMP (i.e. no percpu allocations)
in ip/ip6/arptables.

Error unwind will also be a mess (we could abuse .bcnt to tell whether the pcpu offset should be freed or not).

But maybe I don't understand what you are suggesting :)
Can you elaborate?

Thanks!

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: netfilter question
  2016-11-16 15:02   ` Florian Westphal
@ 2016-11-16 15:23     ` Eric Dumazet
  2016-11-17  0:07       ` Florian Westphal
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2016-11-16 15:23 UTC (permalink / raw)
  To: Florian Westphal; +Cc: Eric Dumazet, Eric Desrochers, netfilter-devel

On Wed, 2016-11-16 at 16:02 +0100, Florian Westphal wrote:
> Eric Dumazet <edumazet@google.com> wrote:
> 
> [..]
> 
> > [..]
> 
> You mean something like this ? :
>         xt_entry_foreach(iter, entry0, newinfo->size) {
> -               ret = find_check_entry(iter, net, repl->name, repl->size);
> -               if (ret != 0)
> +               if (pcpu_alloc == 0) {
> +                       pcnt = __alloc_percpu(PAGE_SIZE, sizeof(struct xt_counters));

alignment should be a page.

> +                       if (IS_ERR_VALUE(pcnt))
> +                               BUG();

well. no BUG() for sure ;)

> +               }
> +
> +               iter->counters.pcnt = pcnt + pcpu_alloc;
> +               iter->counters.bcnt = !!pcpu_alloc;
> +               pcpu_alloc += sizeof(struct xt_counters);
> +
> +               if (pcpu_alloc > PAGE_SIZE - sizeof(struct xt_counters))
> +                       pcpu_alloc = 0;
> +
> +               ret = find_check_entry(iter, net, repl->name, repl->size)
>  ...
> 
> This is going to be ugly, since we'll have to deal with SMP vs. !SMP (i.e. no percpu allocations)
> in ip/ip6/arptables.

Time for a common helper then ...

> 
> Error unwind will also be a mess (we could abuse .bcnt to tell whether the pcpu offset should be freed or not).

Free it if the address is aligned to a page boundary?

Otherwise skip it; it has already been freed earlier.
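[Editor's note: the unwind rule described here can be modeled in a short userspace C sketch. The function name is hypothetical, and malloc-backed pages stand in for percpu memory; the point is that walking the entries in order and freeing only the page-aligned addresses releases each shared page exactly once, with no extra "owns the page" flag.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PG 4096u

/* slots[] holds each entry's counter address, carved sequentially out of
 * shared pages. Returns how many pages were released. */
unsigned int unwind_counters(char **slots, unsigned int n)
{
	unsigned int pages_freed = 0;

	for (unsigned int i = 0; i < n; i++) {
		/* Only the first slot carved from a page sits on a page boundary. */
		if (((uintptr_t)slots[i] & (PG - 1)) == 0) {
			free(slots[i]);
			pages_freed++;
		}
		/* Every other slot shares a page owned by that first slot: skip. */
	}
	return pages_freed;
}
```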

> 
> But maybe I don't understand what you are suggesting :)
> Can you elaborate?

Note that this grouping will also help data locality.

I definitely have servers with a huge number of percpu allocations, and I
fear we might get many TLB misses because of the possible spread of
xt_counters.

Note that percpu pages must not be shared by multiple users
(ip/ip6/arptables); each table should get its own cache of pages.




^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: netfilter question
  2016-11-16 15:23     ` Eric Dumazet
@ 2016-11-17  0:07       ` Florian Westphal
  2016-11-17  2:34         ` Eric Dumazet
                           ` (2 more replies)
  0 siblings, 3 replies; 28+ messages in thread
From: Florian Westphal @ 2016-11-17  0:07 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Florian Westphal, Eric Dumazet, Eric Desrochers, netfilter-devel

Eric Dumazet <eric.dumazet@gmail.com> wrote:
> > 
> > [..]
> > 
> > > [..]
> > 
> > You mean something like this ? :
> >         xt_entry_foreach(iter, entry0, newinfo->size) {
> > -               ret = find_check_entry(iter, net, repl->name, repl->size);
> > -               if (ret != 0)
> > +               if (pcpu_alloc == 0) {
> > +                       pcnt = __alloc_percpu(PAGE_SIZE, sizeof(struct xt_counters));
> 
> alignment should be a page.

[..]

> > Error unwind will also be a mess (we could abuse .bcnt to tell whether the pcpu offset should be freed or not).
> 
> Free if the address is aligned to a page boundary ?

Good idea.  This seems to work for me.  Eric (Desrochers), does this
improve the situation for you as well?

diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -403,38 +403,14 @@ static inline unsigned long ifname_compare_aligned(const char *_a,
 	return ret;
 }
 
+struct xt_percpu_counter_alloc_state {
+	unsigned int off;
+	const char *mem;
+};
 
-/* On SMP, ip(6)t_entry->counters.pcnt holds address of the
- * real (percpu) counter.  On !SMP, its just the packet count,
- * so nothing needs to be done there.
- *
- * xt_percpu_counter_alloc returns the address of the percpu
- * counter, or 0 on !SMP. We force an alignment of 16 bytes
- * so that bytes/packets share a common cache line.
- *
- * Hence caller must use IS_ERR_VALUE to check for error, this
- * allows us to return 0 for single core systems without forcing
- * callers to deal with SMP vs. NONSMP issues.
- */
-static inline unsigned long xt_percpu_counter_alloc(void)
-{
-	if (nr_cpu_ids > 1) {
-		void __percpu *res = __alloc_percpu(sizeof(struct xt_counters),
-						    sizeof(struct xt_counters));
-
-		if (res == NULL)
-			return -ENOMEM;
-
-		return (__force unsigned long) res;
-	}
-
-	return 0;
-}
-static inline void xt_percpu_counter_free(u64 pcnt)
-{
-	if (nr_cpu_ids > 1)
-		free_percpu((void __percpu *) (unsigned long) pcnt);
-}
+bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
+			     struct xt_counters *counter);
+void xt_percpu_counter_free(struct xt_counters *cnt);
 
 static inline struct xt_counters *
 xt_get_this_cpu_counter(struct xt_counters *cnt)
diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
index 39004da318e2..cbea0cb030da 100644
--- a/net/ipv4/netfilter/arp_tables.c
+++ b/net/ipv4/netfilter/arp_tables.c
@@ -411,17 +411,15 @@ static inline int check_target(struct arpt_entry *e, const char *name)
 }
 
 static inline int
-find_check_entry(struct arpt_entry *e, const char *name, unsigned int size)
+find_check_entry(struct arpt_entry *e, const char *name, unsigned int size,
+		 struct xt_percpu_counter_alloc_state *alloc_state)
 {
 	struct xt_entry_target *t;
 	struct xt_target *target;
-	unsigned long pcnt;
 	int ret;
 
-	pcnt = xt_percpu_counter_alloc();
-	if (IS_ERR_VALUE(pcnt))
+	if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
 		return -ENOMEM;
-	e->counters.pcnt = pcnt;
 
 	t = arpt_get_target(e);
 	target = xt_request_find_target(NFPROTO_ARP, t->u.user.name,
@@ -439,7 +437,7 @@ find_check_entry(struct arpt_entry *e, const char *name, unsigned int size)
 err:
 	module_put(t->u.kernel.target->me);
 out:
-	xt_percpu_counter_free(e->counters.pcnt);
+	xt_percpu_counter_free(&e->counters);
 
 	return ret;
 }
@@ -519,7 +517,7 @@ static inline void cleanup_entry(struct arpt_entry *e)
 	if (par.target->destroy != NULL)
 		par.target->destroy(&par);
 	module_put(par.target->me);
-	xt_percpu_counter_free(e->counters.pcnt);
+	xt_percpu_counter_free(&e->counters);
 }
 
 /* Checks and translates the user-supplied table segment (held in
@@ -528,6 +526,7 @@ static inline void cleanup_entry(struct arpt_entry *e)
 static int translate_table(struct xt_table_info *newinfo, void *entry0,
 			   const struct arpt_replace *repl)
 {
+	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
 	struct arpt_entry *iter;
 	unsigned int *offsets;
 	unsigned int i;
@@ -590,7 +589,7 @@ static int translate_table(struct xt_table_info *newinfo, void *entry0,
 	/* Finally, each sanity check must pass */
 	i = 0;
 	xt_entry_foreach(iter, entry0, newinfo->size) {
-		ret = find_check_entry(iter, repl->name, repl->size);
+		ret = find_check_entry(iter, repl->name, repl->size, &alloc_state);
 		if (ret != 0)
 			break;
 		++i;
diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
index 46815c8a60d7..0024550516d1 100644
--- a/net/ipv4/netfilter/ip_tables.c
+++ b/net/ipv4/netfilter/ip_tables.c
@@ -531,7 +531,8 @@ static int check_target(struct ipt_entry *e, struct net *net, const char *name)
 
 static int
 find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
-		 unsigned int size)
+		 unsigned int size,
+		 struct xt_percpu_counter_alloc_state *alloc_state)
 {
 	struct xt_entry_target *t;
 	struct xt_target *target;
@@ -539,12 +540,9 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
 	unsigned int j;
 	struct xt_mtchk_param mtpar;
 	struct xt_entry_match *ematch;
-	unsigned long pcnt;
 
-	pcnt = xt_percpu_counter_alloc();
-	if (IS_ERR_VALUE(pcnt))
+	if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
 		return -ENOMEM;
-	e->counters.pcnt = pcnt;
 
 	j = 0;
 	mtpar.net	= net;
@@ -582,7 +580,7 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
 		cleanup_match(ematch, net);
 	}
 
-	xt_percpu_counter_free(e->counters.pcnt);
+	xt_percpu_counter_free(&e->counters);
 
 	return ret;
 }
@@ -670,7 +668,7 @@ cleanup_entry(struct ipt_entry *e, struct net *net)
 	if (par.target->destroy != NULL)
 		par.target->destroy(&par);
 	module_put(par.target->me);
-	xt_percpu_counter_free(e->counters.pcnt);
+	xt_percpu_counter_free(&e->counters);
 }
 
 /* Checks and translates the user-supplied table segment (held in
@@ -679,6 +677,7 @@ static int
 translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
 		const struct ipt_replace *repl)
 {
+	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
 	struct ipt_entry *iter;
 	unsigned int *offsets;
 	unsigned int i;
@@ -738,7 +737,7 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
 	/* Finally, each sanity check must pass */
 	i = 0;
 	xt_entry_foreach(iter, entry0, newinfo->size) {
-		ret = find_check_entry(iter, net, repl->name, repl->size);
+		ret = find_check_entry(iter, net, repl->name, repl->size, &alloc_state);
 		if (ret != 0)
 			break;
 		++i;
diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
index 6ff42b8301cc..123d9af6742e 100644
--- a/net/ipv6/netfilter/ip6_tables.c
+++ b/net/ipv6/netfilter/ip6_tables.c
@@ -562,7 +562,8 @@ static int check_target(struct ip6t_entry *e, struct net *net, const char *name)
 
 static int
 find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
-		 unsigned int size)
+		 unsigned int size,
+		 struct xt_percpu_counter_alloc_state *alloc_state)
 {
 	struct xt_entry_target *t;
 	struct xt_target *target;
@@ -570,12 +571,9 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
 	unsigned int j;
 	struct xt_mtchk_param mtpar;
 	struct xt_entry_match *ematch;
-	unsigned long pcnt;
 
-	pcnt = xt_percpu_counter_alloc();
-	if (IS_ERR_VALUE(pcnt))
+	if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
 		return -ENOMEM;
-	e->counters.pcnt = pcnt;
 
 	j = 0;
 	mtpar.net	= net;
@@ -612,7 +610,7 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
 		cleanup_match(ematch, net);
 	}
 
-	xt_percpu_counter_free(e->counters.pcnt);
+	xt_percpu_counter_free(&e->counters);
 
 	return ret;
 }
@@ -699,8 +697,7 @@ static void cleanup_entry(struct ip6t_entry *e, struct net *net)
 	if (par.target->destroy != NULL)
 		par.target->destroy(&par);
 	module_put(par.target->me);
-
-	xt_percpu_counter_free(e->counters.pcnt);
+	xt_percpu_counter_free(&e->counters);
 }
 
 /* Checks and translates the user-supplied table segment (held in
@@ -709,6 +706,7 @@ static int
 translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
 		const struct ip6t_replace *repl)
 {
+	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
 	struct ip6t_entry *iter;
 	unsigned int *offsets;
 	unsigned int i;
@@ -768,7 +766,7 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
 	/* Finally, each sanity check must pass */
 	i = 0;
 	xt_entry_foreach(iter, entry0, newinfo->size) {
-		ret = find_check_entry(iter, net, repl->name, repl->size);
+		ret = find_check_entry(iter, net, repl->name, repl->size, &alloc_state);
 		if (ret != 0)
 			break;
 		++i;
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index ad818e52859b..a4d1084b163f 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -1615,6 +1615,59 @@ void xt_proto_fini(struct net *net, u_int8_t af)
 }
 EXPORT_SYMBOL_GPL(xt_proto_fini);
 
+/**
+ * xt_percpu_counter_alloc - allocate x_tables rule counter
+ *
+ * @state: pointer to xt_percpu allocation state
+ * @counter: pointer to counter struct inside the ip(6)/arpt_entry struct
+ *
+ * On SMP, the packet counter [ ip(6)t_entry->counters.pcnt ] will then
+ * contain the address of the real (percpu) counter.
+ *
+ * Rule evaluation needs to use xt_get_this_cpu_counter() helper
+ * to fetch the real percpu counter.
+ *
+ * To speed up allocation and improve data locality, an entire
+ * page is allocated.
+ *
+ * xt_percpu_counter_alloc_state contains the base address of the
+ * allocated page and the current sub-offset.
+ *
+ * returns false on error.
+ */
+bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
+			     struct xt_counters *counter)
+{
+	BUILD_BUG_ON(PAGE_SIZE < (sizeof(*counter) * 2));
+
+	if (nr_cpu_ids <= 1)
+		return true;
+
+	if (state->mem == NULL) {
+		state->mem = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
+		if (!state->mem)
+			return false;
+	}
+	counter->pcnt = (__force unsigned long)(state->mem + state->off);
+	state->off += sizeof(*counter);
+	if (state->off > (PAGE_SIZE - sizeof(*counter))) {
+		state->mem = NULL;
+		state->off = 0;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(xt_percpu_counter_alloc);
+
+void xt_percpu_counter_free(struct xt_counters *counters)
+{
+	unsigned long pcnt = counters->pcnt;
+
+	if (nr_cpu_ids > 1 && (pcnt & (PAGE_SIZE - 1)) == 0)
+		free_percpu((void __percpu *) (unsigned long)pcnt);
+}
+EXPORT_SYMBOL_GPL(xt_percpu_counter_free);
+
 static int __net_init xt_net_init(struct net *net)
 {
 	int i;

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: netfilter question
  2016-11-17  0:07       ` Florian Westphal
@ 2016-11-17  2:34         ` Eric Dumazet
  2016-11-17 15:49         ` Eric Desrochers
  2016-11-20  6:33         ` Eric Dumazet
  2 siblings, 0 replies; 28+ messages in thread
From: Eric Dumazet @ 2016-11-17  2:34 UTC (permalink / raw)
  To: Florian Westphal; +Cc: Eric Dumazet, Eric Desrochers, netfilter-devel

On Thu, 2016-11-17 at 01:07 +0100, Florian Westphal wrote:

Seems very nice !

> +
> +void xt_percpu_counter_free(struct xt_counters *counters)
> +{
> +	unsigned long pcnt = counters->pcnt;
> +
> +	if (nr_cpu_ids > 1 && (pcnt & (PAGE_SIZE - 1)) == 0)
> +		free_percpu((void __percpu *) (unsigned long)pcnt);
> +}


pcnt is already an "unsigned long"

This packing might also speed up "iptables -nvL" dumps ;)



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: netfilter question
  2016-11-17  0:07       ` Florian Westphal
  2016-11-17  2:34         ` Eric Dumazet
@ 2016-11-17 15:49         ` Eric Desrochers
  2016-11-20  6:33         ` Eric Dumazet
  2 siblings, 0 replies; 28+ messages in thread
From: Eric Desrochers @ 2016-11-17 15:49 UTC (permalink / raw)
  To: Florian Westphal, Eric Dumazet; +Cc: Eric Dumazet, netfilter-devel

Hi Florian,

Thanks for the quick response. I will give it a try and get back to you with the outcome of my test.

On 2016-11-17 01:07 AM, Florian Westphal wrote:
> Eric Dumazet <eric.dumazet@gmail.com> wrote:
>>> [..]
>>>
>>> You mean something like this ? :
>>>         xt_entry_foreach(iter, entry0, newinfo->size) {
>>> -               ret = find_check_entry(iter, net, repl->name, repl->size);
>>> -               if (ret != 0)
>>> +               if (pcpu_alloc == 0) {
>>> +                       pcnt = __alloc_percpu(PAGE_SIZE, sizeof(struct xt_counters));
>> alignment should be a page.
> [..]
>
>>> Error unwind will also be a mess (we can abuse .bcnt to tell if pcpu offset should be free'd or not).
>> Free if the address is aligned to a page boundary ?
> Good idea.  This seems to work for me.  Eric (Desrochers), does this
> improve the situation for you as well?
>
> diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
> --- a/include/linux/netfilter/x_tables.h
> +++ b/include/linux/netfilter/x_tables.h
> @@ -403,38 +403,14 @@ static inline unsigned long ifname_compare_aligned(const char *_a,
>  	return ret;
>  }
>  
> +struct xt_percpu_counter_alloc_state {
> +	unsigned int off;
> +	const char *mem;
> +};
>  
> -/* On SMP, ip(6)t_entry->counters.pcnt holds address of the
> - * real (percpu) counter.  On !SMP, its just the packet count,
> - * so nothing needs to be done there.
> - *
> - * xt_percpu_counter_alloc returns the address of the percpu
> - * counter, or 0 on !SMP. We force an alignment of 16 bytes
> - * so that bytes/packets share a common cache line.
> - *
> - * Hence caller must use IS_ERR_VALUE to check for error, this
> - * allows us to return 0 for single core systems without forcing
> - * callers to deal with SMP vs. NONSMP issues.
> - */
> -static inline unsigned long xt_percpu_counter_alloc(void)
> -{
> -	if (nr_cpu_ids > 1) {
> -		void __percpu *res = __alloc_percpu(sizeof(struct xt_counters),
> -						    sizeof(struct xt_counters));
> -
> -		if (res == NULL)
> -			return -ENOMEM;
> -
> -		return (__force unsigned long) res;
> -	}
> -
> -	return 0;
> -}
> -static inline void xt_percpu_counter_free(u64 pcnt)
> -{
> -	if (nr_cpu_ids > 1)
> -		free_percpu((void __percpu *) (unsigned long) pcnt);
> -}
> +bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
> +			     struct xt_counters *counter);
> +void xt_percpu_counter_free(struct xt_counters *cnt);
>  
>  static inline struct xt_counters *
>  xt_get_this_cpu_counter(struct xt_counters *cnt)
> diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
> index 39004da318e2..cbea0cb030da 100644
> --- a/net/ipv4/netfilter/arp_tables.c
> +++ b/net/ipv4/netfilter/arp_tables.c
> @@ -411,17 +411,15 @@ static inline int check_target(struct arpt_entry *e, const char *name)
>  }
>  
>  static inline int
> -find_check_entry(struct arpt_entry *e, const char *name, unsigned int size)
> +find_check_entry(struct arpt_entry *e, const char *name, unsigned int size,
> +		 struct xt_percpu_counter_alloc_state *alloc_state)
>  {
>  	struct xt_entry_target *t;
>  	struct xt_target *target;
> -	unsigned long pcnt;
>  	int ret;
>  
> -	pcnt = xt_percpu_counter_alloc();
> -	if (IS_ERR_VALUE(pcnt))
> +	if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
>  		return -ENOMEM;
> -	e->counters.pcnt = pcnt;
>  
>  	t = arpt_get_target(e);
>  	target = xt_request_find_target(NFPROTO_ARP, t->u.user.name,
> @@ -439,7 +437,7 @@ find_check_entry(struct arpt_entry *e, const char *name, unsigned int size)
>  err:
>  	module_put(t->u.kernel.target->me);
>  out:
> -	xt_percpu_counter_free(e->counters.pcnt);
> +	xt_percpu_counter_free(&e->counters);
>  
>  	return ret;
>  }
> @@ -519,7 +517,7 @@ static inline void cleanup_entry(struct arpt_entry *e)
>  	if (par.target->destroy != NULL)
>  		par.target->destroy(&par);
>  	module_put(par.target->me);
> -	xt_percpu_counter_free(e->counters.pcnt);
> +	xt_percpu_counter_free(&e->counters);
>  }
>  
>  /* Checks and translates the user-supplied table segment (held in
> @@ -528,6 +526,7 @@ static inline void cleanup_entry(struct arpt_entry *e)
>  static int translate_table(struct xt_table_info *newinfo, void *entry0,
>  			   const struct arpt_replace *repl)
>  {
> +	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
>  	struct arpt_entry *iter;
>  	unsigned int *offsets;
>  	unsigned int i;
> @@ -590,7 +589,7 @@ static int translate_table(struct xt_table_info *newinfo, void *entry0,
>  	/* Finally, each sanity check must pass */
>  	i = 0;
>  	xt_entry_foreach(iter, entry0, newinfo->size) {
> -		ret = find_check_entry(iter, repl->name, repl->size);
> +		ret = find_check_entry(iter, repl->name, repl->size, &alloc_state);
>  		if (ret != 0)
>  			break;
>  		++i;
> diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
> index 46815c8a60d7..0024550516d1 100644
> --- a/net/ipv4/netfilter/ip_tables.c
> +++ b/net/ipv4/netfilter/ip_tables.c
> @@ -531,7 +531,8 @@ static int check_target(struct ipt_entry *e, struct net *net, const char *name)
>  
>  static int
>  find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
> -		 unsigned int size)
> +		 unsigned int size,
> +		 struct xt_percpu_counter_alloc_state *alloc_state)
>  {
>  	struct xt_entry_target *t;
>  	struct xt_target *target;
> @@ -539,12 +540,9 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
>  	unsigned int j;
>  	struct xt_mtchk_param mtpar;
>  	struct xt_entry_match *ematch;
> -	unsigned long pcnt;
>  
> -	pcnt = xt_percpu_counter_alloc();
> -	if (IS_ERR_VALUE(pcnt))
> +	if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
>  		return -ENOMEM;
> -	e->counters.pcnt = pcnt;
>  
>  	j = 0;
>  	mtpar.net	= net;
> @@ -582,7 +580,7 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
>  		cleanup_match(ematch, net);
>  	}
>  
> -	xt_percpu_counter_free(e->counters.pcnt);
> +	xt_percpu_counter_free(&e->counters);
>  
>  	return ret;
>  }
> @@ -670,7 +668,7 @@ cleanup_entry(struct ipt_entry *e, struct net *net)
>  	if (par.target->destroy != NULL)
>  		par.target->destroy(&par);
>  	module_put(par.target->me);
> -	xt_percpu_counter_free(e->counters.pcnt);
> +	xt_percpu_counter_free(&e->counters);
>  }
>  
>  /* Checks and translates the user-supplied table segment (held in
> @@ -679,6 +677,7 @@ static int
>  translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
>  		const struct ipt_replace *repl)
>  {
> +	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
>  	struct ipt_entry *iter;
>  	unsigned int *offsets;
>  	unsigned int i;
> @@ -738,7 +737,7 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
>  	/* Finally, each sanity check must pass */
>  	i = 0;
>  	xt_entry_foreach(iter, entry0, newinfo->size) {
> -		ret = find_check_entry(iter, net, repl->name, repl->size);
> +		ret = find_check_entry(iter, net, repl->name, repl->size, &alloc_state);
>  		if (ret != 0)
>  			break;
>  		++i;
> diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
> index 6ff42b8301cc..123d9af6742e 100644
> --- a/net/ipv6/netfilter/ip6_tables.c
> +++ b/net/ipv6/netfilter/ip6_tables.c
> @@ -562,7 +562,8 @@ static int check_target(struct ip6t_entry *e, struct net *net, const char *name)
>  
>  static int
>  find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
> -		 unsigned int size)
> +		 unsigned int size,
> +		 struct xt_percpu_counter_alloc_state *alloc_state)
>  {
>  	struct xt_entry_target *t;
>  	struct xt_target *target;
> @@ -570,12 +571,9 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
>  	unsigned int j;
>  	struct xt_mtchk_param mtpar;
>  	struct xt_entry_match *ematch;
> -	unsigned long pcnt;
>  
> -	pcnt = xt_percpu_counter_alloc();
> -	if (IS_ERR_VALUE(pcnt))
> +	if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
>  		return -ENOMEM;
> -	e->counters.pcnt = pcnt;
>  
>  	j = 0;
>  	mtpar.net	= net;
> @@ -612,7 +610,7 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
>  		cleanup_match(ematch, net);
>  	}
>  
> -	xt_percpu_counter_free(e->counters.pcnt);
> +	xt_percpu_counter_free(&e->counters);
>  
>  	return ret;
>  }
> @@ -699,8 +697,7 @@ static void cleanup_entry(struct ip6t_entry *e, struct net *net)
>  	if (par.target->destroy != NULL)
>  		par.target->destroy(&par);
>  	module_put(par.target->me);
> -
> -	xt_percpu_counter_free(e->counters.pcnt);
> +	xt_percpu_counter_free(&e->counters);
>  }
>  
>  /* Checks and translates the user-supplied table segment (held in
> @@ -709,6 +706,7 @@ static int
>  translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
>  		const struct ip6t_replace *repl)
>  {
> +	struct xt_percpu_counter_alloc_state alloc_state = { 0 };
>  	struct ip6t_entry *iter;
>  	unsigned int *offsets;
>  	unsigned int i;
> @@ -768,7 +766,7 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
>  	/* Finally, each sanity check must pass */
>  	i = 0;
>  	xt_entry_foreach(iter, entry0, newinfo->size) {
> -		ret = find_check_entry(iter, net, repl->name, repl->size);
> +		ret = find_check_entry(iter, net, repl->name, repl->size, &alloc_state);
>  		if (ret != 0)
>  			break;
>  		++i;
> diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
> index ad818e52859b..a4d1084b163f 100644
> --- a/net/netfilter/x_tables.c
> +++ b/net/netfilter/x_tables.c
> @@ -1615,6 +1615,59 @@ void xt_proto_fini(struct net *net, u_int8_t af)
>  }
>  EXPORT_SYMBOL_GPL(xt_proto_fini);
>  
> +/**
> + * xt_percpu_counter_alloc - allocate x_tables rule counter
> + *
> + * @state: pointer to xt_percpu allocation state
> + * @counter: pointer to counter struct inside the ip(6)/arpt_entry struct
> + *
> + * On SMP, the packet counter [ ip(6)t_entry->counters.pcnt ] will then
> + * contain the address of the real (percpu) counter.
> + *
> + * Rule evaluation needs to use xt_get_this_cpu_counter() helper
> + * to fetch the real percpu counter.
> + *
> + * To speed up allocation and improve data locality, an entire
> + * page is allocated.
> + *
> + * xt_percpu_counter_alloc_state contains the base address of the
> + * allocated page and the current sub-offset.
> + *
> + * returns false on error.
> + */
> +bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
> +			     struct xt_counters *counter)
> +{
> +	BUILD_BUG_ON(PAGE_SIZE < (sizeof(*counter) * 2));
> +
> +	if (nr_cpu_ids <= 1)
> +		return true;
> +
> +	if (state->mem == NULL) {
> +		state->mem = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
> +		if (!state->mem)
> +			return false;
> +	}
> +	counter->pcnt = (__force unsigned long)(state->mem + state->off);
> +	state->off += sizeof(*counter);
> +	if (state->off > (PAGE_SIZE - sizeof(*counter))) {
> +		state->mem = NULL;
> +		state->off = 0;
> +	}
> +
> +	return true;
> +}
> +EXPORT_SYMBOL_GPL(xt_percpu_counter_alloc);
> +
> +void xt_percpu_counter_free(struct xt_counters *counters)
> +{
> +	unsigned long pcnt = counters->pcnt;
> +
> +	if (nr_cpu_ids > 1 && (pcnt & (PAGE_SIZE - 1)) == 0)
> +		free_percpu((void __percpu *) (unsigned long)pcnt);
> +}
> +EXPORT_SYMBOL_GPL(xt_percpu_counter_free);
> +
>  static int __net_init xt_net_init(struct net *net)
>  {
>  	int i;


^ permalink raw reply	[flat|nested] 28+ messages in thread
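
The batching scheme in the patch quoted above can be sketched in plain userspace C. This is an illustrative model only: malloc stands in for __alloc_percpu, and struct counter / counter_alloc are invented names, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define BLOCK_SIZE 4096		/* stand-in for PAGE_SIZE */

struct counter {
	unsigned long packets;
	unsigned long bytes;
};

struct alloc_state {
	char *mem;	/* current block, NULL when a fresh one is needed */
	size_t off;	/* next free offset within the block */
};

/*
 * Carve sizeof(struct counter) slots out of a shared block, allocating
 * a fresh block only when the previous one is exhausted.  This turns
 * one allocator call per rule into one call per block's worth of
 * rules, which is why pcpu_alloc_area() disappears from the perf
 * profiles later in the thread.
 */
static struct counter *counter_alloc(struct alloc_state *state)
{
	struct counter *c;

	if (state->mem == NULL) {
		state->mem = malloc(BLOCK_SIZE);
		if (state->mem == NULL)
			return NULL;
		state->off = 0;
	}
	c = (struct counter *)(state->mem + state->off);
	state->off += sizeof(*c);
	if (state->off > BLOCK_SIZE - sizeof(*c)) {
		state->mem = NULL;	/* block full: next call starts anew */
		state->off = 0;
	}
	return c;
}
```

Freeing mirrors the patched xt_percpu_counter_free(): only the counter sitting at offset 0 of its block actually releases the underlying memory, which is why the patch tests (pcnt & (PAGE_SIZE - 1)) == 0.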

* Re: netfilter question
  2016-11-17  0:07       ` Florian Westphal
  2016-11-17  2:34         ` Eric Dumazet
  2016-11-17 15:49         ` Eric Desrochers
@ 2016-11-20  6:33         ` Eric Dumazet
       [not found]           ` <CAGUFhKwQTRRJpfGi2fRkFfGdpLYMN-2F9G+dEsavM7UGbkjjdA@mail.gmail.com>
  2 siblings, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2016-11-20  6:33 UTC (permalink / raw)
  To: Florian Westphal; +Cc: Eric Dumazet, Eric Desrochers, netfilter-devel

On Thu, 2016-11-17 at 01:07 +0100, Florian Westphal wrote:

> +	if (state->mem == NULL) {
> +		state->mem = __alloc_percpu(PAGE_SIZE, PAGE_SIZE);
> +		if (!state->mem)
> +			return false;
> +	}

This will fail on arches where PAGE_SIZE=65536

percpu allocator limit is PCPU_MIN_UNIT_SIZE  ( 32 KB )

So maybe use a smaller value like 4096 ?

#define XT_PCPU_BLOCK_SIZE 4096




* Re: netfilter question
       [not found]           ` <CAGUFhKwQTRRJpfGi2fRkFfGdpLYMN-2F9G+dEsavM7UGbkjjdA@mail.gmail.com>
@ 2016-11-20 17:31             ` Eric Dumazet
  2016-11-20 17:55               ` Eric Dumazet
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Dumazet @ 2016-11-20 17:31 UTC (permalink / raw)
  To: Eric D; +Cc: Eric Dumazet, netfilter-devel, Florian Westphal

On Sun, 2016-11-20 at 12:22 -0500, Eric D wrote:
> I'm currently abroad for work and will come back home soon. I will
> test the solution and provide feedback to Florian by end of week.
> 
> Thanks for jumping on this quickly.
> 
> Eric
> 
> 
> On Nov 20, 2016 7:33 AM, "Eric Dumazet" <eric.dumazet@gmail.com>
> wrote:
>         On Thu, 2016-11-17 at 01:07 +0100, Florian Westphal wrote:
>         
>         > +     if (state->mem == NULL) {
>         > +             state->mem = __alloc_percpu(PAGE_SIZE,
>         PAGE_SIZE);
>         > +             if (!state->mem)
>         > +                     return false;
>         > +     }
>         
>         This will fail on arches where PAGE_SIZE=65536
>         
>         percpu allocator limit is PCPU_MIN_UNIT_SIZE  ( 32 KB )
>         
>         So maybe use a smaller value like 4096 ?
>         
>         #define XT_PCPU_BLOCK_SIZE 4096
>         
Thanks Eric, I will test the patch myself, because I believe we need it
asap ;)






* Re: netfilter question
  2016-11-20 17:31             ` Eric Dumazet
@ 2016-11-20 17:55               ` Eric Dumazet
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Dumazet @ 2016-11-20 17:55 UTC (permalink / raw)
  To: Eric D; +Cc: Eric Dumazet, netfilter-devel, Florian Westphal

On Sun, 2016-11-20 at 09:31 -0800, Eric Dumazet wrote:

> Thanks Eric, I will test the patch myself, because I believe we need it
> asap ;)


Current net-next without Florian patch :

lpaa24:~# time for f in `seq 1 2000` ; do iptables -A FORWARD ; done

real	0m12.856s
user	0m0.590s
sys	0m11.131s


perf report ->

    47.45%  iptables  [kernel.kallsyms]  [k] pcpu_alloc_area                      
     8.49%  iptables  [kernel.kallsyms]  [k] memset_erms                          
     7.35%  iptables  [kernel.kallsyms]  [k] get_counters                         
     2.87%  iptables  [kernel.kallsyms]  [k] __memmove                            
     2.33%  iptables  [kernel.kallsyms]  [k] pcpu_alloc                           
     2.07%  iptables  [kernel.kallsyms]  [k] _find_next_bit.part.0                
     1.62%  iptables  xtables-multi      [.] 0x000000000001bb9d                   
     1.25%  iptables  [kernel.kallsyms]  [k] page_fault                           
     1.01%  iptables  [kernel.kallsyms]  [k] memcmp                               
     0.94%  iptables  [kernel.kallsyms]  [k] translate_table                      
     0.76%  iptables  [kernel.kallsyms]  [k] find_next_bit                        
     0.73%  iptables  [kernel.kallsyms]  [k] filemap_map_pages                    
     0.68%  iptables  [kernel.kallsyms]  [k] copy_user_enhanced_fast_string       
     0.54%  iptables  [kernel.kallsyms]  [k] __get_user_8                         
     0.54%  iptables  [kernel.kallsyms]  [k] clear_page_c_e                

After patch :

lpaa24:~# time for f in `seq 1 2000` ; do iptables -A FORWARD ; done

real	0m3.867s
user	0m0.559s
sys	0m2.216s

    22.15%  iptables  [kernel.kallsyms]  [k] get_counters                           
     5.85%  iptables  xtables-multi      [.] 0x000000000001bbac                     
     3.99%  iptables  [kernel.kallsyms]  [k] page_fault                             
     2.37%  iptables  [kernel.kallsyms]  [k] memcmp                                 
     2.19%  iptables  [kernel.kallsyms]  [k] copy_user_enhanced_fast_string         
     1.89%  iptables  [kernel.kallsyms]  [k] translate_table                        
     1.78%  iptables  [kernel.kallsyms]  [k] memset_erms                            
     1.74%  iptables  [kernel.kallsyms]  [k] clear_page_c_e                         
     1.73%  iptables  [kernel.kallsyms]  [k] __get_user_8                           
     1.72%  iptables  [kernel.kallsyms]  [k] perf_iterate_ctx                       
     1.21%  iptables  [kernel.kallsyms]  [k] handle_mm_fault                        
     0.98%  iptables  [kernel.kallsyms]  [k] unmap_page_range          

So this is a huge win. And I suspect the data path will also gain from
all pcpu counters being in the same area of memory (this is what I am
most interested in).




* netfilter question
@ 2012-12-10 20:12 ` Sri Ram Vemulpali
  0 siblings, 0 replies; 28+ messages in thread
From: Sri Ram Vemulpali @ 2012-12-10 20:12 UTC (permalink / raw)
  To: linux-netdev, linux-kernel-mail, linux-newbie

Hi Guys,

I am writing a netfilter hooks module for applying kernel rules on
incoming packets. I am implementing hook at NF_IP_PRE_ROUTING.

From my understanding, after an sk_buff passes the NF_IP_PRE_ROUTING
hook it enters the routing path, which determines whether the packet is
destined for the local host; if it is not, it is routed on to the
destination host, based on the destination address in the IP header,
through a local host interface.

My question is: if I modify the sk_buff's IP header at the
NF_IP_PRE_ROUTING hook with the address of an external (destination)
host, I think the packet will then be routed to that destination host
through the right interface instead of being passed up to the local host.

Please correct my assumption, or tell me if there is anything more I
should be doing, because I am building a system that routes packets by
applying kernel rules.

thanks in advance.

-- 
Regards,
Sri.
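
One practical point behind the question above: a hook that rewrites the IP destination address must also recompute the IPv4 header checksum before the stack makes its routing decision (in the kernel this is ip_send_check()/ip_fast_csum()). The sketch below is a standalone userspace version of the same RFC 1071 one's-complement sum, with illustrative names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * RFC 1071 Internet checksum over an IPv4 header.  The header's own
 * checksum field must be zeroed before (re)computing it.
 */
static uint16_t ip_checksum(const void *hdr, size_t len)
{
	const uint8_t *p = hdr;
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)	/* 16-bit big-endian words */
		sum += (uint32_t)p[i] << 8 | p[i + 1];
	if (i < len)				/* odd trailing byte */
		sum += (uint32_t)p[i] << 8;
	while (sum >> 16)			/* fold the carries */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

A header carrying a correct checksum sums to all-ones, so verification amounts to checking that ip_checksum() over the unmodified header returns 0.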



* netfilter question
  2005-02-10 23:15   ` ULOG target for ipv6 Jonas Berlin
@ 2005-02-11 22:10     ` Pedro Fortuna
  0 siblings, 0 replies; 28+ messages in thread
From: Pedro Fortuna @ 2005-02-11 22:10 UTC (permalink / raw)
  To: netfilter-devel

Hello guys. I'll try to make it as short and simple as I can.

I want to develop a kernel module which will be running in two linux
hosts, connected by a crossover network cable (ethernet). This kernel
module will intercept a specific type of traffic (as an example, let's
say FTP packets (encapsulated in DIX frames)), both incoming and
outgoing, and change the ethertype in the frame header.

Outgoing DIX frames carrying FTP packets get their ethertype changed
to a private, non standard ethertype number, just before they leave
the host (i.e. before they are passed to the network driver). The
frame is intercepted with the NF_IP_POST_ROUTING hook.

Incoming DIX frames carrying FTP packets get their ethertype changed
(at this point, a non-standard ethertype number) back to the standard
IPv4 ethertype number (i.e. 0x800), just after they are
processed by the network driver. The frame is intercepted with the
NF_IP_PRE_ROUTING hook.

My doubt is:
I'm not sure if I will be able to intercept the incoming frames
because they have a non standard ethertype number. They might get
dropped before passing through the NF_IP_PRE_ROUTING hook, due to the
unrecognized ethertype number. Is this true or false?
If the frame passes the hook before trying to identify the packet
type, then I'll have no trouble, because my netfilter module changes
the frame to the original ethertype number, thus making the whole
process transparent to the TCP/IP stacks running in both hosts.

I could explain what the hell I need this for, but then you would have
three times as much text to read :P I tried to keep this post to a
minimal, painless size.

Regards,
-Pedro Fortuna
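
For what it's worth, the answer to the doubt above: the frames would not reach NF_IP_PRE_ROUTING at all. The netfilter IPv4 hooks live inside the IP receive handler, which the network core only calls for frames whose ethertype is ETH_P_IP, so frames carrying a private ethertype have to be caught at the packet-type (dev_add_pack) level instead. The ethertype rewrite itself is just two bytes of the frame header, sketched here in standalone C with an illustrative private ethertype:

```c
#include <assert.h>
#include <stdint.h>

#define ETH_P_IP	0x0800	/* standard IPv4 ethertype */
#define ETH_P_PRIVATE	0x88b5	/* IEEE "local experimental" ethertype (example) */

/* The ethertype occupies bytes 12-13 of an untagged Ethernet frame,
 * stored in network (big-endian) byte order. */
static void set_ethertype(uint8_t *frame, uint16_t proto)
{
	frame[12] = (uint8_t)(proto >> 8);
	frame[13] = (uint8_t)(proto & 0xff);
}

static uint16_t get_ethertype(const uint8_t *frame)
{
	return (uint16_t)((frame[12] << 8) | frame[13]);
}
```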


* Re: netfilter question
  2004-02-19 20:25 John Black
@ 2004-02-19 21:22 ` Antony Stone
  0 siblings, 0 replies; 28+ messages in thread
From: Antony Stone @ 2004-02-19 21:22 UTC (permalink / raw)
  To: netfilter

On Thursday 19 February 2004 8:25 pm, John Black wrote:

> >I think you simply need to remove the "-d 161.x.x.x/21" from your rule and
> >
> >things will start working the way you want.
>
> just wanted to make sure this is right.
> iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0  -j SNAT --to
> 161.x.x.x

Yes, that's what I meant.

> <iptables -t nat -L -n> gives me
>
> Chain POSTROUTING (policy ACCEPT)
> target     prot opt source               destination
> SNAT       all  --  192.168.0.0/24       0.0.0.0/0          to:161.x.x.x

A better command is "iptables -t nat -L -nv" because the -v option also shows
the interface names.

Regards,

Antony.

-- 
I'm pink, therefore I'm Spam.

                                                     Please reply to the list;
                                                           please don't CC me.




* Re: netfilter question
@ 2004-02-19 20:25 John Black
  2004-02-19 21:22 ` Antony Stone
  0 siblings, 1 reply; 28+ messages in thread
From: John Black @ 2004-02-19 20:25 UTC (permalink / raw)
  To: netfilter


>I think you simply need to remove the "-d 161.x.x.x/21" from your rule and

>things will start working the way you want.
>
>Regards,
>

just wanted to make sure this is right.
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0  -j SNAT --to 161.x.x.x


<iptables -t nat -L -n> gives me

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
 
Chain POSTROUTING (policy ACCEPT) 
target     prot opt source               destination
SNAT       all  --  192.168.0.0/24       0.0.0.0/0          to:161.x.x.x

Is that right? So that should mask my internal network?

john
 

http://www.arbbs.net/



* Re: netfilter question
  2004-02-19 16:23 John Black
@ 2004-02-19 17:06 ` Antony Stone
  0 siblings, 0 replies; 28+ messages in thread
From: Antony Stone @ 2004-02-19 17:06 UTC (permalink / raw)
  To: netfilter

On Thursday 19 February 2004 4:23 pm, John Black wrote:

> >this assumption is because you're saying 161.x.x.x/21  as destination,
> >all other destinations that doesnt belong to 161.x.x.x to
> >161.x.x+8.x+255 will be not nat'ed
>
> right now i just have 1 class C private network.
>
> At work i have a static class B ipaddress of 161.x.x.x/255.255.252.0 with
> the private class C network 192.168.0.0/255.255.255.0

We are talking about the *destination* address of your packets.

The SNAT rule you currently have:

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -d 161.x.x.x/21 -j 
SNAT --to 161.x.x.x

Says:

 - for packets which have a source address in the range 192.168.0.0/24
 - and are going out of interface eth0
 - and have a destination address in the range 161.x.x.x/21
translate the source address to 161.x.x.x

Any other packets (eg: ones with a destination address of the netfilter 
website server) will not match this rule, and will not be translated.

I think you simply need to remove the "-d 161.x.x.x/21" from your rule and 
things will start working the way you want.

Regards,

Antony.

-- 
Ramdisk is not an installation procedure.

                                                     Please reply to the list;
                                                           please don't CC me.




* Re: netfilter question
@ 2004-02-19 16:56 John Black
  0 siblings, 0 replies; 28+ messages in thread
From: John Black @ 2004-02-19 16:56 UTC (permalink / raw)
  To: netfilter

>
> - for packets which have a source address in the range 192.168.0.0/24
> - and are going out of interface eth0
> - and have a destination address in the range 161.x.x.x/21
>translate the source address to 161.x.x.x
>
>Any other packets (eg: ones with a destination address of the netfilter 
>website server) will not match this rule, and will not be translated.
>
>I think you simply need to remove the "-d 161.x.x.x/21" from your rule and

>things will start working the way you want.
>
>Regards,
>
>Antony.
thanks i will try it when i get to work.

john
http://www.arbbs.net/



* Re: netfilter question
@ 2004-02-19 16:23 John Black
  2004-02-19 17:06 ` Antony Stone
  0 siblings, 1 reply; 28+ messages in thread
From: John Black @ 2004-02-19 16:23 UTC (permalink / raw)
  To: netfilter


>
>this assumption is because you're saying 161.x.x.x/21 as the destination;
>all other destinations that don't belong to the range 161.x.x.x through
>161.x.(x+7).255 will not be NAT'ed
>

right now i just have 1 class C private network.

At work i have a static class B ipaddress of 161.x.x.x/255.255.252.0 with the
private class C network 192.168.0.0/255.255.255.0

john
http://www.arbbs.net/



* Re: netfilter question
@ 2004-02-19 16:00 John Black
  0 siblings, 0 replies; 28+ messages in thread
From: John Black @ 2004-02-19 16:00 UTC (permalink / raw)
  To: netfilter

>if 24 bits define a mask that is considered as Class C
>
>24-21 = 3 
>2 power 3 = 8 
>
>so, its 8 /24 or 8 Class C networks.

Sorry, it has been a while since I have had basic networking.
http://www.arbbs.net/



* Re: netfilter question
  2004-02-19 14:13 John Black
@ 2004-02-19 14:51 ` Alexis
  0 siblings, 0 replies; 28+ messages in thread
From: Alexis @ 2004-02-19 14:51 UTC (permalink / raw)
  To: black; +Cc: Netfilter

On Thu, 2004-02-19 at 11:13, John Black wrote:
> >
> >Okay, so that rule is going to hide your 192.168.0.0/24 >network behind the
> public address of the firewall for all >packets going to addresses in the range
> 161.x.x.x/21 (ie 8 >Class C's in size).
> 8 Class C's?

if 24 bits define a mask that is considered as Class C

24-21 = 3 
2 power 3 = 8 

so, it's 8 /24s, or 8 Class C networks.



> 
> >How are you testing this and deciding it doesn't work?
> I'm testing it with my Windows machine going to the MSN chat rooms, because I
> know it will show you what IP address you are coming from.  Is there a better
> way to check it?

www.whatsmyipaddress.com is a lazy but effective way :)

or just simply log or sniff output packets


> >(By the way, why are you only translating packets which are >going to (presumably)
> your ISP?   What about packets going >anywhere else on the Internet?).
> 
> I thought that translated all of the packets?  How is it only translating packets
> to the ISP?
> 

this assumption is because you're saying 161.x.x.x/21 as the destination;
all other destinations that don't belong to the range 161.x.x.x through
161.x.(x+7).255 will not be NAT'ed




> john
> http://www.arbbs.net/
-- 
Alexis <alexis@attla.net.ar>
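
Alexis's arithmetic above generalizes: a /N prefix (N <= 24) spans 2^(24-N) class-C-sized /24 networks. A trivial standalone check (illustrative helper name):

```c
#include <assert.h>

/* Number of /24 ("class C") networks contained in a /prefix, prefix <= 24. */
static unsigned int class_c_count(unsigned int prefix)
{
	return 1u << (24 - prefix);
}
```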




* Re: netfilter question
  2004-02-19 13:38 John Black
@ 2004-02-19 14:18 ` Antony Stone
  0 siblings, 0 replies; 28+ messages in thread
From: Antony Stone @ 2004-02-19 14:18 UTC (permalink / raw)
  To: netfilter

On Thursday 19 February 2004 1:38 pm, John Black wrote:

> here are the rule sets.
> iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED \
>   -j ACCEPT
> iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
> iptables -A FORWARD -j LOG
>
> iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 \
> -d 161.x.x.x/21 -j SNAT --to 161.x.x.x

Okay, so that rule is going to hide your 192.168.0.0/24 network behind the 
public address of the firewall for all packets going to addresses in the 
range 161.x.x.x/21 (ie 8 Class C's in size).

How are you testing this and deciding it doesn't work?

(By the way, why are you only translating packets which are going to 
(presumably) your ISP?   What about packets going anywhere else on the 
Internet?).

Antony.

-- 
The words "e pluribus unum" on the Great Seal of the United States are from a 
poem by Virgil entitled "Moretum", which is about cheese and garlic salad 
dressing.

                                                     Please reply to the list;
                                                           please don't CC me.




* Re: netfilter question
@ 2004-02-19 14:13 John Black
  2004-02-19 14:51 ` Alexis
  0 siblings, 1 reply; 28+ messages in thread
From: John Black @ 2004-02-19 14:13 UTC (permalink / raw)
  To: netfilter


>
>Okay, so that rule is going to hide your 192.168.0.0/24 >network behind the
public address of the firewall for all >packets going to addresses in the range
161.x.x.x/21 (ie 8 >Class C's in size).
8 Class C's?

>How are you testing this and deciding it doesn't work?
I'm testing it with my Windows machine going to the MSN chat rooms, because I
know it will show you what IP address you are coming from.  Is there a better
way to check it?

>(By the way, why are you only translating packets which are >going to (presumably)
your ISP?   What about packets going >anywhere else on the Internet?).

I thought that translated all of the packets?  How is it only translating packets
to the ISP?

john
http://www.arbbs.net/



* Re: netfilter question
@ 2004-02-19 13:38 John Black
  2004-02-19 14:18 ` Antony Stone
  0 siblings, 1 reply; 28+ messages in thread
From: John Black @ 2004-02-19 13:38 UTC (permalink / raw)
  To: netfilter

>Please post your complete ruleset, including the definitions of variables such
>as $IP_LAN_RNG, so we can see if there's something wrong.

here are the rule sets.  
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED \
  -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -j LOG

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 \
-d 161.x.x.x/21 -j SNAT --to 161.x.x.x

<iptables -L -nvx>

Chain INPUT (policy ACCEPT 127 packets, 9436 bytes)
pkts  bytes target  prot opt in   out   source   destination


Chain FORWARD (policy ACCEPT 36 packets, 1709 bytes)
pkts     bytes target      prot opt    in    out     source
destination
 0       0    ACCEPT   all     --    eth0  eth1   0.0.0.0/0    0.0.0.0/0 state
RELATED, ESTABLISHED

 0       0    ACCEPT   all     --    eth1  eth0   0.0.0.0/0    0.0.0.0/0

 0       0    ACCEPT   all     --    *        *      0.0.0.0/0    0.0.0.0/0
LOG flags 0 level 4

Chain OUTPUT (policy ACCEPT 74 packets, 8568 bytes)
pkts  bytes target  prot opt in   out   source   destination

<iptables -t nat -L> 
target  prot opt source            destination 
SNAT    all  --  192.168.0.0/24    161.x.x.x/21 to:161.x.x.x


John
http://www.arbbs.net/



* Re: netfilter question
  2004-02-19 13:06   ` John Black
@ 2004-02-19 13:17     ` Antony Stone
  0 siblings, 0 replies; 28+ messages in thread
From: Antony Stone @ 2004-02-19 13:17 UTC (permalink / raw)
  To: netfilter

On Thursday 19 February 2004 1:06 pm, John Black wrote:

> > $IPT -A POSTROUTING -t nat -s $IP_LAN_RNG -o $IF_NET -j SNAT --to-source
> > $IP_NET
>
> I have that line in my firewall script, but it still doesn't mask my
> internal network.

Please post your complete ruleset, including the definitions of variables such 
as $IP_LAN_RNG, so we can see if there's something wrong.   By all means 
disguise any public IP's if you wish, but not so much that we can't tell 
which one is which (if you have more than one).

Please also tell us how you are testing the rules and why you think it doesn't 
work (I know that may sound obvious, but every little helps...)

Regards,

Antony.

-- 
Your work is both good and original.  Unfortunately the parts that are good 
aren't original, and the parts that are original aren't good.

 - Samuel Johnson

                                                     Please reply to the list;
                                                           please don't CC me.




* Re: netfilter question
  2004-02-19  8:19 ` Klemen Kecman
  2004-02-19  9:22   ` Antony Stone
@ 2004-02-19 13:06   ` John Black
  2004-02-19 13:17     ` Antony Stone
  1 sibling, 1 reply; 28+ messages in thread
From: John Black @ 2004-02-19 13:06 UTC (permalink / raw)
  To: netfilter


> $IPT -A POSTROUTING -t nat -s $IP_LAN_RNG -o $IF_NET -j SNAT --to-source
> $IP_NET
I have that line in my firewall script, but it still doesn't mask my
internal network.

> That is for routing. If you want to secure your network and the router
> itselfe it takes alot more .. like setting up a firewall :)
>
I have that part set up.  All ports are blocked but the ones that I need. The
firewall will drop all pings to the outside NIC and inside NIC.  I just have
to mask the internal network.




* Re: netfilter question
  2004-02-19  8:19 ` Klemen Kecman
@ 2004-02-19  9:22   ` Antony Stone
  2004-02-19 13:06   ` John Black
  1 sibling, 0 replies; 28+ messages in thread
From: Antony Stone @ 2004-02-19  9:22 UTC (permalink / raw)
  To: netfilter

On Thursday 19 February 2004 8:19 am, Klemen Kecman wrote:

> For dynamic IP (ADSL)
> and for static IP (Cable)

I know this is off-topic, but I just thought I'd point out that these are not 
universal distinctions.   Different operators provide different services.   
For example, I have a dynamic IP cable service from NTL, and a static IP ADSL 
service from BT, here in the UK.

Antony.

-- 
The idea that Bill Gates appeared like a knight in shining armour to lead all 
customers out of a mire of technological chaos neatly ignores the fact that 
it was he who, by peddling second-rate technology, led them into it in the 
first place.

 - Douglas Adams in The Guardian, 25th August 1995

                                                     Please reply to the list;
                                                           please don't CC me.




* Re: netfilter question
  2004-02-19  3:32 John Black
@ 2004-02-19  8:19 ` Klemen Kecman
  2004-02-19  9:22   ` Antony Stone
  2004-02-19 13:06   ` John Black
  0 siblings, 2 replies; 28+ messages in thread
From: Klemen Kecman @ 2004-02-19  8:19 UTC (permalink / raw)
  To: netfilter; +Cc: John Black

For dynamic IP (ADSL)
$IPT -t nat -A POSTROUTING -o $IF_INET -j MASQUERADE

and for static IP (Cable)
$IPT -A POSTROUTING -t nat -s $IP_LAN_RNG -o $IF_NET -j SNAT --to-source
$IP_NET


That is for routing. If you want to secure your network and the router
itselfe it takes alot more .. like setting up a firewall :)

Klemen Kecman
Sting d.o.o.
Smartinska 106
1000 Ljubljana - SI
+386 1 5246033
+386 41 456421

----- Original Message -----
From: "John Black" <black@arbbs.net>
To: <netfilter@lists.netfilter.org>
Sent: Thursday, February 19, 2004 4:32 AM
Subject: netfilter question


> I'm trying to install a gateway/router with Red Hat 9 kernel 2.4.24 and the
> stock iptables 1.2.7a, with full NAT compiled into the kernel. I have read
> the howto at netfilter.org, even have the same line of code.  But it still
> will not change the source address.
>
> here is the line of code and the result of the command <iptables -L -nvx>
>
> iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED \
>   -j ACCEPT
> iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
> iptables -A FORWARD -j LOG
>
> Chain INPUT (policy ACCEPT 127 packets, 9436 bytes)
> pkts  bytes target  prot opt in   out   source   destination
>
>
> Chain FORWARD (policy ACCEPT 36 packets, 1709 bytes)
>  pkts  bytes  target  prot opt  in    out   source     destination
>     0      0  ACCEPT  all  --   eth0  eth1  0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED
>     0      0  ACCEPT  all  --   eth1  eth0  0.0.0.0/0  0.0.0.0/0
>     0      0  ACCEPT  all  --   *     *     0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4
>
> Chain OUTPUT (policy ACCEPT 74 packets, 8568 bytes)
> pkts  bytes target  prot opt in   out   source   destination
>
>
> I'm new to network security. Am I close?
>
> thanks
> john
>
>
>
>



^ permalink raw reply	[flat|nested] 28+ messages in thread

* netfilter question
@ 2004-02-19  3:32 John Black
  2004-02-19  8:19 ` Klemen Kecman
  0 siblings, 1 reply; 28+ messages in thread
From: John Black @ 2004-02-19  3:32 UTC (permalink / raw)
  To: netfilter

I'm trying to install a gateway/router with Red Hat 9, kernel 2.4.24, and
the stock iptables 1.2.7a, with full NAT compiled into the kernel. I have
read the howto at netfilter.org and even have the same line of code, but it
still will not change the source address.

Here are the rules and the output of the command <iptables -L -nvx>:

iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -j LOG

Chain INPUT (policy ACCEPT 127 packets, 9436 bytes)
pkts  bytes target  prot opt in   out   source   destination


Chain FORWARD (policy ACCEPT 36 packets, 1709 bytes)
 pkts  bytes  target  prot opt  in    out   source     destination
    0      0  ACCEPT  all  --   eth0  eth1  0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED
    0      0  ACCEPT  all  --   eth1  eth0  0.0.0.0/0  0.0.0.0/0
    0      0  ACCEPT  all  --   *     *     0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4

Chain OUTPUT (policy ACCEPT 74 packets, 8568 bytes)
pkts  bytes target  prot opt in   out   source   destination


I'm new to network security. Am I close?

thanks
john
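[Editorial note: the symptom above is consistent with a missing NAT rule. The FORWARD rules shown only accept or log traffic in the filter table; source rewriting happens in the nat table's POSTROUTING chain, which does not appear in the listing. A minimal sketch of the missing piece, assuming (as placeholders) that eth0 is the external interface and 192.168.0.0/24 the internal network:]

```shell
# The FORWARD rules from the message decide whether packets pass ...
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

# ... but the source address is only rewritten by a nat/POSTROUTING
# rule, which is the step missing here:
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

# Forwarding must also be enabled in the kernel:
echo 1 > /proc/sys/net/ipv4/ip_forward
```

The nat table is listed separately; `iptables -t nat -L -nvx` would show whether any POSTROUTING rule is present.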





^ permalink raw reply	[flat|nested] 28+ messages in thread

* Netfilter Question
@ 2001-10-24 13:09 Shiva Raman Pandey
  0 siblings, 0 replies; 28+ messages in thread
From: Shiva Raman Pandey @ 2001-10-24 13:09 UTC (permalink / raw)
  To: linux-kernel

Hi Friends,
When I receive a packet using netfilter/iptables, I want to send it twice;
that is, I want to call the set_verdict function twice.
Is that possible?
Is there any method to achieve this?
Do I have to play around with the packet_id and data_len parameters?
Will the handle create any problem?

Regards
Shiva



^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2016-11-20 17:55 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <cad49557-7c7a-83c9-d2b6-71d9624f0d52@miromedia.ca>
2016-11-16 13:33 ` netfilter question Eric Dumazet
2016-11-16 15:02   ` Florian Westphal
2016-11-16 15:23     ` Eric Dumazet
2016-11-17  0:07       ` Florian Westphal
2016-11-17  2:34         ` Eric Dumazet
2016-11-17 15:49         ` Eric Desrochers
2016-11-20  6:33         ` Eric Dumazet
     [not found]           ` <CAGUFhKwQTRRJpfGi2fRkFfGdpLYMN-2F9G+dEsavM7UGbkjjdA@mail.gmail.com>
2016-11-20 17:31             ` Eric Dumazet
2016-11-20 17:55               ` Eric Dumazet
2012-12-10 20:12 Sri Ram Vemulpali
2012-12-10 20:12 ` Sri Ram Vemulpali
  -- strict thread matches above, loose matches on Subject: below --
2005-02-08  7:50 netfilter & ipv6 Jonas Berlin
     [not found] ` <53965.213.236.112.75.1107867276.squirrel@213.236.112.75>
2005-02-10 23:15   ` ULOG target for ipv6 Jonas Berlin
2005-02-11 22:10     ` netfilter question Pedro Fortuna
2004-02-19 20:25 John Black
2004-02-19 21:22 ` Antony Stone
2004-02-19 16:56 John Black
2004-02-19 16:23 John Black
2004-02-19 17:06 ` Antony Stone
2004-02-19 16:00 John Black
2004-02-19 14:13 John Black
2004-02-19 14:51 ` Alexis
2004-02-19 13:38 John Black
2004-02-19 14:18 ` Antony Stone
2004-02-19  3:32 John Black
2004-02-19  8:19 ` Klemen Kecman
2004-02-19  9:22   ` Antony Stone
2004-02-19 13:06   ` John Black
2004-02-19 13:17     ` Antony Stone
2001-10-24 13:09 Netfilter Question Shiva Raman Pandey

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.