* [PATCH] netfilter: nf_conncount: reduce unnecessary GC
@ 2022-05-04 1:09 William Tu
2022-05-04 15:35 ` [PATCHv2] " William Tu
From: William Tu @ 2022-05-04 1:09 UTC
To: netfilter-devel; +Cc: fw, Yifeng Sun, Greg Rose
Currently nf_conncount can trigger garbage collection (GC)
at multiple places. Each GC process takes a spin_lock_bh
to traverse the nf_conncount_list. We found that when testing
port scanning with two parallel nmap processes, the number of
connections increases quickly, and nf_conncount_count and its
subsequent call to __nf_conncount_add take too much time,
causing soft lockups on several CPUs. This happens when the
user sets the conntrack limit above 20,000, because the larger
the limit, the longer the list that GC has to traverse.
The patch mitigates the performance issue by avoiding
unnecessary GC with a timestamp. Whenever nf_conncount has
done a GC, a timestamp is updated, and before the next GC is
triggered, we make sure at least one jiffy has elapsed. By
doing this we can greatly reduce the CPU cycles spent and
avoid the softirq lockup.
To reproduce it in OVS,
$ ovs-appctl dpctl/ct-set-limits zone=1,limit=20000
$ ovs-appctl dpctl/ct-get-limits
On another machine, run two nmap scans:
$ nmap -p1- <IP>
$ nmap -p1- <IP>
Signed-off-by: William Tu <u9012063@gmail.com>
Co-authored-by: Yifeng Sun <pkusunyifeng@gmail.com>
Reported-by: Greg Rose <gvrose8192@gmail.com>
Suggested-by: Florian Westphal <fw@strlen.de>
---
include/net/netfilter/nf_conntrack_count.h | 1 +
net/netfilter/nf_conncount.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/include/net/netfilter/nf_conntrack_count.h b/include/net/netfilter/nf_conntrack_count.h
index 9645b47fa7e4..f39070d3e17f 100644
--- a/include/net/netfilter/nf_conntrack_count.h
+++ b/include/net/netfilter/nf_conntrack_count.h
@@ -12,6 +12,7 @@ struct nf_conncount_list {
spinlock_t list_lock;
struct list_head head; /* connections with the same filtering key */
unsigned int count; /* length of list */
+ unsigned long last_gc; /* jiffies at most recent gc */
};
struct nf_conncount_data *nf_conncount_init(struct net *net, unsigned int family,
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 82f36beb2e76..6480711ecd44 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -134,6 +134,9 @@ static int __nf_conncount_add(struct net *net,
/* check the saved connections */
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
+ if (time_after_eq(list->last_gc, jiffies))
+ break;
+
if (collect > CONNCOUNT_GC_MAX_NODES)
break;
@@ -190,6 +193,7 @@ static int __nf_conncount_add(struct net *net,
conn->jiffies32 = (u32)jiffies;
list_add_tail(&conn->node, &list->head);
list->count++;
+ list->last_gc = jiffies;
return 0;
}
@@ -214,6 +218,7 @@ void nf_conncount_list_init(struct nf_conncount_list *list)
spin_lock_init(&list->list_lock);
INIT_LIST_HEAD(&list->head);
list->count = 0;
+ list->last_gc = jiffies;
}
EXPORT_SYMBOL_GPL(nf_conncount_list_init);
@@ -231,6 +236,12 @@ bool nf_conncount_gc_list(struct net *net,
if (!spin_trylock(&list->list_lock))
return false;
+ /* don't bother if we just done GC */
+ if (time_after_eq(list->last_gc, jiffies)) {
+ spin_unlock(&list->list_lock);
+ return false;
+ }
+
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
found = find_or_evict(net, list, conn);
if (IS_ERR(found)) {
@@ -258,6 +269,7 @@ bool nf_conncount_gc_list(struct net *net,
if (!list->count)
ret = true;
+ list->last_gc = jiffies;
spin_unlock(&list->list_lock);
return ret;
--
2.30.1 (Apple Git-130)
* [PATCHv2] netfilter: nf_conncount: reduce unnecessary GC
2022-05-04 1:09 [PATCH] netfilter: nf_conncount: reduce unnecessary GC William Tu
@ 2022-05-04 15:35 ` William Tu
From: William Tu @ 2022-05-04 15:35 UTC
To: netfilter-devel; +Cc: Yifeng Sun, Greg Rose, Florian Westphal
Currently nf_conncount can trigger garbage collection (GC)
at multiple places. Each GC process takes a spin_lock_bh
to traverse the nf_conncount_list. We found that when testing
port scanning with two parallel nmap processes, the number of
connections increases quickly, and nf_conncount_count and its
subsequent call to __nf_conncount_add take too much time,
causing soft lockups on several CPUs. This happens when the
user sets the conntrack limit above 20,000, because the larger
the limit, the longer the list that GC has to traverse.
The patch mitigates the performance issue by avoiding
unnecessary GC with a timestamp. Whenever nf_conncount has
done a GC, a timestamp is updated, and before the next GC is
triggered, we make sure at least one jiffy has elapsed. By
doing this we can greatly reduce the CPU cycles spent and
avoid the softirq lockup.
To reproduce it in OVS,
$ ovs-appctl dpctl/ct-set-limits zone=1,limit=20000
$ ovs-appctl dpctl/ct-get-limits
On another machine, run two nmap scans:
$ nmap -p1- <IP>
$ nmap -p1- <IP>
Signed-off-by: William Tu <u9012063@gmail.com>
Co-authored-by: Yifeng Sun <pkusunyifeng@gmail.com>
Reported-by: Greg Rose <gvrose8192@gmail.com>
Suggested-by: Florian Westphal <fw@strlen.de>
---
v2:
- use u32 jiffies in struct nf_conncount_list;
the 4-byte list_lock is now followed by the 4-byte last_gc
- move the timestamp check before lock at
nf_conncount_gc_list and use READ_ONCE
- move the timestamp check out of list for each loop
in __nf_conncount_add
---
include/net/netfilter/nf_conntrack_count.h | 1 +
net/netfilter/nf_conncount.c | 11 +++++++++++
2 files changed, 12 insertions(+)
diff --git a/include/net/netfilter/nf_conntrack_count.h b/include/net/netfilter/nf_conntrack_count.h
index 9645b47fa7e4..e227d997fc71 100644
--- a/include/net/netfilter/nf_conntrack_count.h
+++ b/include/net/netfilter/nf_conntrack_count.h
@@ -10,6 +10,7 @@ struct nf_conncount_data;
struct nf_conncount_list {
spinlock_t list_lock;
+ u32 last_gc; /* jiffies at most recent gc */
struct list_head head; /* connections with the same filtering key */
unsigned int count; /* length of list */
};
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 82f36beb2e76..5d8ed6c90b7e 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -132,6 +132,9 @@ static int __nf_conncount_add(struct net *net,
struct nf_conn *found_ct;
unsigned int collect = 0;
+ if (time_is_after_eq_jiffies((unsigned long)list->last_gc))
+ goto add_new_node;
+
/* check the saved connections */
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
if (collect > CONNCOUNT_GC_MAX_NODES)
@@ -177,6 +180,7 @@ static int __nf_conncount_add(struct net *net,
nf_ct_put(found_ct);
}
+add_new_node:
if (WARN_ON_ONCE(list->count > INT_MAX))
return -EOVERFLOW;
@@ -190,6 +194,7 @@ static int __nf_conncount_add(struct net *net,
conn->jiffies32 = (u32)jiffies;
list_add_tail(&conn->node, &list->head);
list->count++;
+ list->last_gc = (u32)jiffies;
return 0;
}
@@ -214,6 +219,7 @@ void nf_conncount_list_init(struct nf_conncount_list *list)
spin_lock_init(&list->list_lock);
INIT_LIST_HEAD(&list->head);
list->count = 0;
+ list->last_gc = (u32)jiffies;
}
EXPORT_SYMBOL_GPL(nf_conncount_list_init);
@@ -227,6 +233,10 @@ bool nf_conncount_gc_list(struct net *net,
unsigned int collected = 0;
bool ret = false;
+ /* don't bother if we just did GC */
+ if (time_is_after_eq_jiffies((unsigned long)READ_ONCE(list->last_gc)))
+ return false;
+
/* don't bother if other cpu is already doing GC */
if (!spin_trylock(&list->list_lock))
return false;
@@ -258,6 +268,7 @@ bool nf_conncount_gc_list(struct net *net,
if (!list->count)
ret = true;
+ list->last_gc = (u32)jiffies;
spin_unlock(&list->list_lock);
return ret;
--
2.30.1 (Apple Git-130)
* Re: [PATCH] netfilter: nf_conncount: reduce unnecessary GC
2022-05-04 6:07 ` Florian Westphal
@ 2022-05-04 15:45 ` William Tu
From: William Tu @ 2022-05-04 15:45 UTC
To: Florian Westphal; +Cc: netfilter-devel, Yifeng Sun, Greg Rose
On Tue, May 3, 2022 at 11:07 PM Florian Westphal <fw@strlen.de> wrote:
>
> William Tu <u9012063@gmail.com> wrote:
> > @@ -231,6 +236,12 @@ bool nf_conncount_gc_list(struct net *net,
> > if (!spin_trylock(&list->list_lock))
> > return false;
> >
> > + /* don't bother if we just done GC */
> > + if (time_after_eq(list->last_gc, jiffies)) {
> > + spin_unlock(&list->list_lock);
>
> Minor nit, I think you could place the time_after_eq test before
> the spin_trylock if you do wrap the list->last_gc read with READ_ONCE().
Thanks! will do in v2.
>
> You could also check if changing last_gc to u32 and placing it after
> the "list_lock" member prevents growth of the list structure.
Makes sense, will use u32.
William
* Re: [PATCH] netfilter: nf_conncount: reduce unnecessary GC
2022-05-03 21:52 [PATCH] " William Tu
@ 2022-05-04 6:07 ` Florian Westphal
2022-05-04 15:45 ` William Tu
From: Florian Westphal @ 2022-05-04 6:07 UTC
To: William Tu; +Cc: netfilter-devel, fw, Yifeng Sun, Greg Rose
William Tu <u9012063@gmail.com> wrote:
> @@ -231,6 +236,12 @@ bool nf_conncount_gc_list(struct net *net,
> if (!spin_trylock(&list->list_lock))
> return false;
>
> + /* don't bother if we just done GC */
> + if (time_after_eq(list->last_gc, jiffies)) {
> + spin_unlock(&list->list_lock);
Minor nit, I think you could place the time_after_eq test before
the spin_trylock if you do wrap the list->last_gc read with READ_ONCE().
You could also check if changing last_gc to u32 and placing it after
the "list_lock" member prevents growth of the list structure.
* [PATCH] netfilter: nf_conncount: reduce unnecessary GC
@ 2022-05-03 21:52 William Tu
2022-05-04 6:07 ` Florian Westphal
From: William Tu @ 2022-05-03 21:52 UTC
To: netfilter-devel; +Cc: fw, Yifeng Sun, Greg Rose
Currently nf_conncount can trigger garbage collection (GC)
at multiple places. Each GC process takes a spin_lock_bh
to traverse the nf_conncount_list. We found that when testing
port scanning with two parallel nmap processes, the number of
connections increases quickly, and nf_conncount_count and its
subsequent call to __nf_conncount_add take too much time,
causing soft lockups on several CPUs. This happens when the
user sets the conntrack limit above 20,000, because the larger
the limit, the longer the list that GC has to traverse.
The patch mitigates the performance issue by avoiding
unnecessary GC with a timestamp. Whenever nf_conncount has
done a GC, a timestamp is updated, and before the next GC is
triggered, we make sure at least one jiffy has elapsed. By
doing this we can greatly reduce the CPU cycles spent and
avoid the softirq lockup.
To reproduce it in OVS,
$ ovs-appctl dpctl/ct-set-limits zone=1,limit=20000
$ ovs-appctl dpctl/ct-get-limits
On another machine, run two nmap scans:
$ nmap -p1- <IP>
$ nmap -p1- <IP>
Signed-off-by: William Tu <u9012063@gmail.com>
Co-authored-by: Yifeng Sun <pkusunyifeng@gmail.com>
Reported-by: Greg Rose <gvrose8192@gmail.com>
Suggested-by: Florian Westphal <fw@strlen.de>
---
include/net/netfilter/nf_conntrack_count.h | 1 +
net/netfilter/nf_conncount.c | 12 ++++++++++++
2 files changed, 13 insertions(+)
diff --git a/include/net/netfilter/nf_conntrack_count.h b/include/net/netfilter/nf_conntrack_count.h
index 9645b47fa7e4..f39070d3e17f 100644
--- a/include/net/netfilter/nf_conntrack_count.h
+++ b/include/net/netfilter/nf_conntrack_count.h
@@ -12,6 +12,7 @@ struct nf_conncount_list {
spinlock_t list_lock;
struct list_head head; /* connections with the same filtering key */
unsigned int count; /* length of list */
+ unsigned long last_gc; /* jiffies at most recent gc */
};
struct nf_conncount_data *nf_conncount_init(struct net *net, unsigned int family,
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 82f36beb2e76..6480711ecd44 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -134,6 +134,9 @@ static int __nf_conncount_add(struct net *net,
/* check the saved connections */
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
+ if (time_after_eq(list->last_gc, jiffies))
+ break;
+
if (collect > CONNCOUNT_GC_MAX_NODES)
break;
@@ -190,6 +193,7 @@ static int __nf_conncount_add(struct net *net,
conn->jiffies32 = (u32)jiffies;
list_add_tail(&conn->node, &list->head);
list->count++;
+ list->last_gc = jiffies;
return 0;
}
@@ -214,6 +218,7 @@ void nf_conncount_list_init(struct nf_conncount_list *list)
spin_lock_init(&list->list_lock);
INIT_LIST_HEAD(&list->head);
list->count = 0;
+ list->last_gc = jiffies;
}
EXPORT_SYMBOL_GPL(nf_conncount_list_init);
@@ -231,6 +236,12 @@ bool nf_conncount_gc_list(struct net *net,
if (!spin_trylock(&list->list_lock))
return false;
+ /* don't bother if we just done GC */
+ if (time_after_eq(list->last_gc, jiffies)) {
+ spin_unlock(&list->list_lock);
+ return false;
+ }
+
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
found = find_or_evict(net, list, conn);
if (IS_ERR(found)) {
@@ -258,6 +269,7 @@ bool nf_conncount_gc_list(struct net *net,
if (!list->count)
ret = true;
+ list->last_gc = jiffies;
spin_unlock(&list->list_lock);
return ret;
--
2.30.1 (Apple Git-130)