* [PATCH net-next] ipv4: Allow amount of dirty memory from fib resizing to be controllable
From: David Ahern @ 2019-03-20 16:18 UTC
  To: davem, netdev; +Cc: David Ahern

From: David Ahern <dsahern@gmail.com>

The fib_trie implementation calls synchronize_rcu when a certain number of
pages are dirty from freed entries. The number of pages was determined
experimentally in 2009 (commit c3059477fce2d).

At the current setting, synchronize_rcu is called often -- 51 times in a
second in one test, with an average delay of 8 msec per fib entry added.
The total impact is a significant slowdown in fib modifications, which shows
up in the output of 'time' as the difference between real time and sys+user.
For example, using 720,022 single path routes and 'ip -batch'[1]:

    $ time ./ip -batch ipv4/routes-1-hops
    real    0m14.214s
    user    0m2.513s
    sys     0m6.783s

So roughly 35% of the actual time to install the routes (14.214 - 9.296 =
4.918 sec) is the ip command getting scheduled out, most notably due to
synchronize_rcu (observed using 'perf sched timehist').
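
For reference, that scheduling data can be captured along these lines
(illustrative invocation, not necessarily the exact one used here):

    $ perf sched record -- ./ip -batch ipv4/routes-1-hops
    $ perf sched timehist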

This patch makes the amount of dirty memory configurable, from 64kB, where
synchronize_rcu is called often (small, low end systems that are memory
sensitive), to 64MB, where synchronize_rcu is called rarely during a large
FIB change (high end systems with lots of memory). The default is 512kB,
which corresponds to the current setting of 128 pages with a 4kB page size.
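
With the patch applied, the threshold is adjusted like any other sysctl.
For example, to raise it from the 512kB default to the 16MB used for the
numbers below:

    $ sysctl net.ipv4.fib_sync_mem
    net.ipv4.fib_sync_mem = 524288
    $ sysctl -w net.ipv4.fib_sync_mem=16777216
    net.ipv4.fib_sync_mem = 16777216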

As an example, at 16MB the worst interval shows 4 calls to synchronize_rcu
in a second, blocking for up to 30 msec in a single instance and a total of
almost 100 msec across the 4 calls in that second. The trade off is allowing
FIB entries to consume more memory in a given time window in exchange for
much better fib insertion rates (~30% increase in prefixes/sec). With this
patch and net.ipv4.fib_sync_mem set to 16MB, the same batch file runs in:

    $ time ./ip -batch ipv4/routes-1-hops
    real    0m9.692s
    user    0m2.491s
    sys     0m6.769s

So the dead time is reduced to about 1/2 second (9.692 - 9.260 = 0.432 sec),
or <5% of the real time.

[1] 'ip' modified to not request ACK messages, which improves route
    insertion times by about 20%
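
For anyone wanting to reproduce this, a batch file of roughly that shape
can be generated along these lines (addresses and loop bounds here are made
up; this yields ~720k single path /32 routes via one nexthop):

    $ for i in $(seq 1 11); do
          for j in $(seq 0 255); do
              for k in $(seq 0 255); do
                  echo "route add 10.$i.$j.$k/32 via 172.16.1.2"
              done
          done
      done > ipv4/routes-1-hops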

Signed-off-by: David Ahern <dsahern@gmail.com>
---
 Documentation/networking/ip-sysctl.txt |  5 +++++
 include/net/ip.h                       |  4 ++++
 net/ipv4/fib_trie.c                    | 14 ++++++++------
 net/ipv4/sysctl_net_ipv4.c             |  9 +++++++++
 4 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
index 55ea7def46be..d0bd03f56dbc 100644
--- a/Documentation/networking/ip-sysctl.txt
+++ b/Documentation/networking/ip-sysctl.txt
@@ -81,6 +81,11 @@ fib_multipath_hash_policy - INTEGER
 	0 - Layer 3
 	1 - Layer 4
 
+fib_sync_mem - UNSIGNED INTEGER
+	Amount of dirty memory from fib entries that can be backlogged before
+	synchronize_rcu is forced.
+	  Default: 512kB   Minimum: 64kB   Maximum: 64MB
+
 ip_forward_update_priority - INTEGER
 	Whether to update SKB priority from "TOS" field in IPv4 header after it
 	is forwarded. The new SKB priority is mapped from TOS field value
diff --git a/include/net/ip.h b/include/net/ip.h
index be3cad9c2e4c..aa09ae5f01a5 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -38,6 +38,10 @@
 #define IPV4_MAX_PMTU		65535U		/* RFC 2675, Section 5.1 */
 #define IPV4_MIN_MTU		68			/* RFC 791 */
 
+extern unsigned int sysctl_fib_sync_mem;
+extern unsigned int sysctl_fib_sync_mem_min;
+extern unsigned int sysctl_fib_sync_mem_max;
+
 struct sock;
 
 struct inet_skb_parm {
diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
index a573e37e0615..1704f432de1f 100644
--- a/net/ipv4/fib_trie.c
+++ b/net/ipv4/fib_trie.c
@@ -183,14 +183,16 @@ struct trie {
 };
 
 static struct key_vector *resize(struct trie *t, struct key_vector *tn);
-static size_t tnode_free_size;
+static unsigned int tnode_free_size;
 
 /*
- * synchronize_rcu after call_rcu for that many pages; it should be especially
- * useful before resizing the root node with PREEMPT_NONE configs; the value was
- * obtained experimentally, aiming to avoid visible slowdown.
+ * synchronize_rcu after call_rcu for outstanding dirty memory; it should be
+ * especially useful before resizing the root node with PREEMPT_NONE configs;
+ * the value was obtained experimentally, aiming to avoid visible slowdown.
  */
-static const int sync_pages = 128;
+unsigned int sysctl_fib_sync_mem = 512 * 1024;
+unsigned int sysctl_fib_sync_mem_min = 64 * 1024;
+unsigned int sysctl_fib_sync_mem_max = 64 * 1024 * 1024;
 
 static struct kmem_cache *fn_alias_kmem __ro_after_init;
 static struct kmem_cache *trie_leaf_kmem __ro_after_init;
@@ -504,7 +506,7 @@ static void tnode_free(struct key_vector *tn)
 		tn = container_of(head, struct tnode, rcu)->kv;
 	}
 
-	if (tnode_free_size >= PAGE_SIZE * sync_pages) {
+	if (tnode_free_size >= sysctl_fib_sync_mem) {
 		tnode_free_size = 0;
 		synchronize_rcu();
 	}
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index ba0fc4b18465..2316c08e9591 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -549,6 +549,15 @@ static struct ctl_table ipv4_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_doulongvec_minmax,
 	},
+	{
+		.procname	= "fib_sync_mem",
+		.data		= &sysctl_fib_sync_mem,
+		.maxlen		= sizeof(sysctl_fib_sync_mem),
+		.mode		= 0644,
+		.proc_handler	= proc_douintvec_minmax,
+		.extra1		= &sysctl_fib_sync_mem_min,
+		.extra2		= &sysctl_fib_sync_mem_max,
+	},
 	{ }
 };
 
-- 
2.11.0



* Re: [PATCH net-next] ipv4: Allow amount of dirty memory from fib resizing to be controllable
From: David Miller @ 2019-03-21 20:30 UTC
  To: dsahern; +Cc: netdev, dsahern

From: David Ahern <dsahern@kernel.org>
Date: Wed, 20 Mar 2019 09:18:59 -0700

Looks nice, applied, thanks David.
