* [RFC PATCH v2 0/6] Reduce cache miss for snmp_fold_field
@ 2016-09-06  2:30 ` Jia He
  0 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

On a PowerPC server with a large number of CPUs (160), in addition to
the call site already optimized by commit a3a773726c9f ("net: Optimize
snmp stat aggregation by walking all the percpu data at once"), I
observed several other snmp_fold_field call sites that cause a high
cache-miss rate.

My simple test case, which reads the procfs items endlessly:
/***********************************************************/
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#define LINELEN  2560

int main(int argc, char **argv)
{
        unsigned int i;
        int fd = -1;
        int rdsize = 0;
        char buf[LINELEN + 1];

        memset(buf, 0, sizeof(buf));

        if (argc <= 1) {
                printf("file name empty\n");
                return -1;
        }

        /* the statistics files are only read */
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                printf("open error\n");
                return -2;
        }

        /* read the file over and over again */
        for (i = 0; i < 0xffffffff; i++) {
                while ((rdsize = read(fd, buf, LINELEN)) > 0) {
                        /* discard the data, only the reads matter */
                }

                lseek(fd, 0, SEEK_SET);
        }

        close(fd);
        return 0;
}
/**********************************************************/

compile and run:
gcc test.c -o test

perf stat -d -e cache-misses ./test /proc/net/snmp
perf stat -d -e cache-misses ./test /proc/net/snmp6
perf stat -d -e cache-misses ./test /proc/net/netstat
perf stat -d -e cache-misses ./test /proc/net/sctp/snmp
perf stat -d -e cache-misses ./test /proc/net/xfrm_stat

Before the patch set:
====================
 Performance counter stats for 'system wide':

         355911097      cache-misses                                                 [40.08%]
        2356829300      L1-dcache-loads                                              [60.04%]
         355642645      L1-dcache-load-misses     #   15.09% of all L1-dcache hits   [60.02%]
         346544541      LLC-loads                                                    [59.97%]
            389763      LLC-load-misses           #    0.11% of all LL-cache hits    [40.02%]

       6.245162638 seconds time elapsed

After the patch set:
===================
 Performance counter stats for 'system wide':

         194992476      cache-misses                                                 [40.03%]
        6718051877      L1-dcache-loads                                              [60.07%]
         194871921      L1-dcache-load-misses     #    2.90% of all L1-dcache hits   [60.11%]
         187632232      LLC-loads                                                    [60.04%]
            464466      LLC-load-misses           #    0.25% of all LL-cache hits    [39.89%]

       6.868422769 seconds time elapsed

The L1-dcache miss rate is reduced from 15.09% to 2.90%.
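
To illustrate the idea behind the series (a minimal sketch with made-up
names, not the actual kernel code): instead of folding one MIB field
across all CPUs before moving on to the next field, each CPU is walked
once and every field is accumulated into a local buffer, so each
per-cpu MIB block is pulled into the cache only once:

	/* before: one pass over all possible CPUs per field */
	for (i = 0; i < N_FIELDS; i++)
		for_each_possible_cpu(c)
			sum[i] += read_cpu_field(mib, c, i);

	/* after: one pass over the CPUs, accumulating all fields */
	memset(sum, 0, sizeof(sum));
	for_each_possible_cpu(c)
		for (i = 0; i < N_FIELDS; i++)
			sum[i] += read_cpu_field(mib, c, i);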

v2:
- 1/6: fix a bug in the UDP-Lite statistics
- 1/6: snmp_seq_show is split into two parts

Jia He (6):
  proc: Reduce cache miss in {snmp,netstat}_seq_show
  proc: Reduce cache miss in snmp6_seq_show
  proc: Reduce cache miss in sctp_snmp_seq_show
  proc: Reduce cache miss in xfrm_statistics_seq_show
  ipv6: Remove useless parameter in __snmp6_fill_statsdev
  net: Suppress the "Comparison to NULL could be written" warning

 net/ipv4/proc.c      | 144 ++++++++++++++++++++++++++++++++++-----------------
 net/ipv6/addrconf.c  |  12 ++---
 net/ipv6/proc.c      |  47 +++++++++++++----
 net/sctp/proc.c      |  15 ++++--
 net/xfrm/xfrm_proc.c |  15 ++++--
 5 files changed, 162 insertions(+), 71 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 1/6] proc: Reduce cache miss in {snmp,netstat}_seq_show
  2016-09-06  2:30 ` Jia He
@ 2016-09-06  2:30   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

This patch exchanges the two loops that collect the percpu statistics
data, so the data are aggregated by walking all the items of each CPU
sequentially. snmp_seq_show is also split into two parts to avoid the
build warning about a frame size larger than 1024 bytes.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv4/proc.c | 112 ++++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 81 insertions(+), 31 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 9f665b6..f413fdc 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -46,6 +46,9 @@
 #include <net/sock.h>
 #include <net/raw.h>
 
+#define MAX(a, b) ((u32)(a) >= (u32)(b) ? (a) : (b))
+#define TCPUDP_MIB_MAX MAX(UDP_MIB_MAX, TCP_MIB_MAX)
+
 /*
  *	Report socket allocation statistics [mea@utu.fi]
  */
@@ -378,13 +381,15 @@ static void icmp_put(struct seq_file *seq)
 /*
  *	Called from the PROCfs module. This outputs /proc/net/snmp.
  */
-static int snmp_seq_show(struct seq_file *seq, void *v)
+static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 {
-	int i;
+	int i, c;
+	u64 buff64[IPSTATS_MIB_MAX];
 	struct net *net = seq->private;
 
-	seq_puts(seq, "Ip: Forwarding DefaultTTL");
+	memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
 
+	seq_puts(seq, "Ip: Forwarding DefaultTTL");
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
 
@@ -393,57 +398,92 @@ static int snmp_seq_show(struct seq_file *seq, void *v)
 		   net->ipv4.sysctl_ip_default_ttl);
 
 	BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);
+
+	for_each_possible_cpu(c) {
+		for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
+			buff64[i] += snmp_get_cpu_field64(
+					net->mib.ip_statistics,
+					c, snmp4_ipstats_list[i].entry,
+					offsetof(struct ipstats_mib, syncp));
+	}
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
-		seq_printf(seq, " %llu",
-			   snmp_fold_field64(net->mib.ip_statistics,
-					     snmp4_ipstats_list[i].entry,
-					     offsetof(struct ipstats_mib, syncp)));
+		seq_printf(seq, " %llu", buff64[i]);
 
-	icmp_put(seq);	/* RFC 2011 compatibility */
-	icmpmsg_put(seq);
+	return 0;
+}
+
+static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
+{
+	int i, c;
+	unsigned long buff[TCPUDP_MIB_MAX];
+	struct net *net = seq->private;
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	seq_puts(seq, "\nTcp:");
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_tcp_list[i].name);
 
 	seq_puts(seq, "\nTcp:");
+	for_each_possible_cpu(c) {
+		for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
+			buff[i] += snmp_get_cpu_field(net->mib.tcp_statistics,
+						c, snmp4_tcp_list[i].entry);
+	}
+
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
 		/* MaxConn field is signed, RFC 2012 */
 		if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
-			seq_printf(seq, " %ld",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %ld", buff[i]);
 		else
-			seq_printf(seq, " %lu",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %lu", buff[i]);
 	}
 
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
+
+	for_each_possible_cpu(c) {
+		for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+			buff[i] += snmp_get_cpu_field(net->mib.udp_statistics,
+						c, snmp4_udp_list[i].entry);
+	}
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udp_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	/* the UDP and UDP-Lite MIBs are the same */
 	seq_puts(seq, "\nUdpLite:");
+	for_each_possible_cpu(c) {
+		for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+			buff[i] += snmp_get_cpu_field(net->mib.udplite_statistics,
+						c, snmp4_udp_list[i].entry);
+	}
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdpLite:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udplite_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
 
 	seq_putc(seq, '\n');
 	return 0;
 }
 
+static int snmp_seq_show(struct seq_file *seq, void *v)
+{
+	snmp_seq_show_ipstats(seq, v);
+
+	icmp_put(seq);	/* RFC 2011 compatibility */
+	icmpmsg_put(seq);
+
+	snmp_seq_show_tcp_udp(seq, v);
+
+	return 0;
+}
+
 static int snmp_seq_open(struct inode *inode, struct file *file)
 {
 	return single_open_net(inode, file, snmp_seq_show);
@@ -464,29 +504,39 @@ static const struct file_operations snmp_seq_fops = {
  */
 static int netstat_seq_show(struct seq_file *seq, void *v)
 {
-	int i;
+	int i, c;
+	unsigned long buff[LINUX_MIB_MAX];
+	u64 buff64[IPSTATS_MIB_MAX];
 	struct net *net = seq->private;
 
+	memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_MAX);
+	memset(buff64, 0, sizeof(u64) * IPSTATS_MIB_MAX);
+
 	seq_puts(seq, "TcpExt:");
 	for (i = 0; snmp4_net_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_net_list[i].name);
 
 	seq_puts(seq, "\nTcpExt:");
+	for_each_possible_cpu(c)
+		for (i = 0; snmp4_net_list[i].name != NULL; i++)
+			buff[i] += snmp_get_cpu_field(net->mib.net_statistics,
+						c, snmp4_net_list[i].entry);
 	for (i = 0; snmp4_net_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.net_statistics,
-					   snmp4_net_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
 
 	seq_puts(seq, "\nIpExt:");
 	for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_ipextstats_list[i].name);
 
 	seq_puts(seq, "\nIpExt:");
+	for_each_possible_cpu(c)
+		for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
+			buff64[i] += snmp_get_cpu_field64(
+					net->mib.ip_statistics,
+					c, snmp4_ipextstats_list[i].entry,
+					offsetof(struct ipstats_mib, syncp));
 	for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
-		seq_printf(seq, " %llu",
-			   snmp_fold_field64(net->mib.ip_statistics,
-					     snmp4_ipextstats_list[i].entry,
-					     offsetof(struct ipstats_mib, syncp)));
+		seq_printf(seq, " %llu", buff64[i]);
 
 	seq_putc(seq, '\n');
 	return 0;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 2/6] proc: Reduce cache miss in snmp6_seq_show
  2016-09-06  2:30 ` Jia He
@ 2016-09-06  2:30   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

This patch exchanges the two loops that collect the percpu statistics
data. This reduces cache misses by walking all the items of each CPU
sequentially.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv6/proc.c | 47 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 11 deletions(-)

diff --git a/net/ipv6/proc.c b/net/ipv6/proc.c
index 679253d0..c834646 100644
--- a/net/ipv6/proc.c
+++ b/net/ipv6/proc.c
@@ -30,6 +30,13 @@
 #include <net/transp_v6.h>
 #include <net/ipv6.h>
 
+#define MAX(a, b) ((u32)(a) >= (u32)(b) ? (a) : (b))
+
+#define MAX4(a, b, c, d) \
+	MAX(MAX(a, b), MAX(c, d))
+#define SNMP_MIB_MAX MAX4(UDP_MIB_MAX, TCP_MIB_MAX, \
+			IPSTATS_MIB_MAX, ICMP_MIB_MAX)
+
 static int sockstat6_seq_show(struct seq_file *seq, void *v)
 {
 	struct net *net = seq->private;
@@ -191,25 +198,43 @@ static void snmp6_seq_show_item(struct seq_file *seq, void __percpu *pcpumib,
 				atomic_long_t *smib,
 				const struct snmp_mib *itemlist)
 {
-	int i;
-	unsigned long val;
-
-	for (i = 0; itemlist[i].name; i++) {
-		val = pcpumib ?
-			snmp_fold_field(pcpumib, itemlist[i].entry) :
-			atomic_long_read(smib + itemlist[i].entry);
-		seq_printf(seq, "%-32s\t%lu\n", itemlist[i].name, val);
+	int i, c;
+	unsigned long buff[SNMP_MIB_MAX];
+
+	memset(buff, 0, sizeof(unsigned long) * SNMP_MIB_MAX);
+
+	if (pcpumib) {
+		for_each_possible_cpu(c)
+			for (i = 0; itemlist[i].name; i++)
+				buff[i] += snmp_get_cpu_field(pcpumib, c,
+							itemlist[i].entry);
+		for (i = 0; itemlist[i].name; i++)
+			seq_printf(seq, "%-32s\t%lu\n",
+				   itemlist[i].name, buff[i]);
+	} else {
+		for (i = 0; itemlist[i].name; i++)
+			seq_printf(seq, "%-32s\t%lu\n", itemlist[i].name,
+				   atomic_long_read(smib + itemlist[i].entry));
 	}
 }
 
 static void snmp6_seq_show_item64(struct seq_file *seq, void __percpu *mib,
 				  const struct snmp_mib *itemlist, size_t syncpoff)
 {
-	int i;
+	int i, c;
+	u64 buff[SNMP_MIB_MAX];
+
+	memset(buff, 0, sizeof(unsigned long) * SNMP_MIB_MAX);
 
+	for_each_possible_cpu(c) {
+		for (i = 0; itemlist[i].name; i++) {
+			buff[i] += snmp_get_cpu_field64(mib, c,
+							itemlist[i].entry,
+							syncpoff);
+		}
+	}
 	for (i = 0; itemlist[i].name; i++)
-		seq_printf(seq, "%-32s\t%llu\n", itemlist[i].name,
-			   snmp_fold_field64(mib, itemlist[i].entry, syncpoff));
+		seq_printf(seq, "%-32s\t%llu\n", itemlist[i].name, buff[i]);
 }
 
 static int snmp6_seq_show(struct seq_file *seq, void *v)
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 3/6] proc: Reduce cache miss in sctp_snmp_seq_show
  2016-09-06  2:30 ` Jia He
@ 2016-09-06  2:30   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

This patch exchanges the two loops that collect the percpu statistics
data. This reduces cache misses by walking all the items of each CPU
sequentially.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/sctp/proc.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/net/sctp/proc.c b/net/sctp/proc.c
index ef8ba77..085fb95 100644
--- a/net/sctp/proc.c
+++ b/net/sctp/proc.c
@@ -74,12 +74,19 @@ static const struct snmp_mib sctp_snmp_list[] = {
 static int sctp_snmp_seq_show(struct seq_file *seq, void *v)
 {
 	struct net *net = seq->private;
-	int i;
+	int i, c;
+	unsigned long buff[SCTP_MIB_MAX];
 
+	memset(buff, 0, sizeof(unsigned long) * SCTP_MIB_MAX);
+
+	for_each_possible_cpu(c)
+		for (i = 0; sctp_snmp_list[i].name != NULL; i++)
+			buff[i] += snmp_get_cpu_field(
+						net->sctp.sctp_statistics,
+						c, sctp_snmp_list[i].entry);
 	for (i = 0; sctp_snmp_list[i].name != NULL; i++)
 		seq_printf(seq, "%-32s\t%ld\n", sctp_snmp_list[i].name,
-			   snmp_fold_field(net->sctp.sctp_statistics,
-				      sctp_snmp_list[i].entry));
+						buff[i]);
 
 	return 0;
 }
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 4/6] proc: Reduce cache miss in xfrm_statistics_seq_show
  2016-09-06  2:30 ` Jia He
@ 2016-09-06  2:30   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

This patch exchanges the two loops that collect the percpu statistics
data. This reduces cache misses by walking all the items of each CPU
sequentially.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/xfrm/xfrm_proc.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/net/xfrm/xfrm_proc.c b/net/xfrm/xfrm_proc.c
index 9c4fbd8..c9df546 100644
--- a/net/xfrm/xfrm_proc.c
+++ b/net/xfrm/xfrm_proc.c
@@ -51,11 +51,20 @@ static const struct snmp_mib xfrm_mib_list[] = {
 static int xfrm_statistics_seq_show(struct seq_file *seq, void *v)
 {
 	struct net *net = seq->private;
-	int i;
-	for (i = 0; xfrm_mib_list[i].name; i++)
+	int i, c;
+	unsigned long buff[LINUX_MIB_XFRMMAX];
+
+	memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_XFRMMAX);
+
+	for_each_possible_cpu(c)
+		for (i = 0; xfrm_mib_list[i].name != NULL; i++)
+			buff[i] += snmp_get_cpu_field(
+						net->mib.xfrm_statistics,
+						c, xfrm_mib_list[i].entry);
+	for (i = 0; xfrm_mib_list[i].name != NULL; i++)
 		seq_printf(seq, "%-24s\t%lu\n", xfrm_mib_list[i].name,
-			   snmp_fold_field(net->mib.xfrm_statistics,
-					   xfrm_mib_list[i].entry));
+						buff[i]);
+
 	return 0;
 }
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 5/6] ipv6: Remove useless parameter in __snmp6_fill_statsdev
  2016-09-06  2:30 ` Jia He
@ 2016-09-06  2:30   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

The parameter 'items' (always ICMP6_MIB_MAX) is useless for __snmp6_fill_statsdev.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv6/addrconf.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index f418d2e..e170554 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -4952,18 +4952,18 @@ static inline size_t inet6_if_nlmsg_size(void)
 }
 
 static inline void __snmp6_fill_statsdev(u64 *stats, atomic_long_t *mib,
-				      int items, int bytes)
+					int bytes)
 {
 	int i;
-	int pad = bytes - sizeof(u64) * items;
+	int pad = bytes - sizeof(u64) * ICMP6_MIB_MAX;
 	BUG_ON(pad < 0);
 
 	/* Use put_unaligned() because stats may not be aligned for u64. */
-	put_unaligned(items, &stats[0]);
-	for (i = 1; i < items; i++)
+	put_unaligned(ICMP6_MIB_MAX, &stats[0]);
+	for (i = 1; i < ICMP6_MIB_MAX; i++)
 		put_unaligned(atomic_long_read(&mib[i]), &stats[i]);
 
-	memset(&stats[items], 0, pad);
+	memset(&stats[ICMP6_MIB_MAX], 0, pad);
 }
 
 static inline void __snmp6_fill_stats64(u64 *stats, void __percpu *mib,
@@ -4996,7 +4996,7 @@ static void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype,
 				     offsetof(struct ipstats_mib, syncp));
 		break;
 	case IFLA_INET6_ICMP6STATS:
-		__snmp6_fill_statsdev(stats, idev->stats.icmpv6dev->mibs, ICMP6_MIB_MAX, bytes);
+		__snmp6_fill_statsdev(stats, idev->stats.icmpv6dev->mibs, bytes);
 		break;
 	}
 }
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 6/6] net: Suppress the "Comparison to NULL could be written" warning
  2016-09-06  2:30 ` Jia He
@ 2016-09-06  2:30   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-06  2:30 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, Jia He

This suppresses the checkpatch.pl warning "Comparison to NULL could
be written". No functional changes.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv4/proc.c      | 44 ++++++++++++++++++++++----------------------
 net/sctp/proc.c      |  4 ++--
 net/xfrm/xfrm_proc.c |  4 ++--
 3 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index f413fdc..bf0bb22 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -358,22 +358,22 @@ static void icmp_put(struct seq_file *seq)
 	atomic_long_t *ptr = net->mib.icmpmsg_statistics->mibs;
 
 	seq_puts(seq, "\nIcmp: InMsgs InErrors InCsumErrors");
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " In%s", icmpmibmap[i].name);
 	seq_puts(seq, " OutMsgs OutErrors");
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " Out%s", icmpmibmap[i].name);
 	seq_printf(seq, "\nIcmp: %lu %lu %lu",
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_INMSGS),
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_INERRORS),
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_CSUMERRORS));
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " %lu",
 			   atomic_long_read(ptr + icmpmibmap[i].index));
 	seq_printf(seq, " %lu %lu",
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_OUTMSGS),
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_OUTERRORS));
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " %lu",
 			   atomic_long_read(ptr + (icmpmibmap[i].index | 0x100)));
 }
@@ -390,7 +390,7 @@ static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 	memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
 
 	seq_puts(seq, "Ip: Forwarding DefaultTTL");
-	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipstats_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
 
 	seq_printf(seq, "\nIp: %d %d",
@@ -400,13 +400,13 @@ static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 	BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);
 
 	for_each_possible_cpu(c) {
-		for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
+		for (i = 0; snmp4_ipstats_list[i].name; i++)
 			buff64[i] += snmp_get_cpu_field64(
 					net->mib.ip_statistics,
 					c, snmp4_ipstats_list[i].entry,
 					offsetof(struct ipstats_mib, syncp));
 	}
-	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipstats_list[i].name; i++)
 		seq_printf(seq, " %llu", buff64[i]);
 
 	return 0;
@@ -421,17 +421,17 @@ static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
 	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	seq_puts(seq, "\nTcp:");
-	for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_tcp_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_tcp_list[i].name);
 
 	seq_puts(seq, "\nTcp:");
 	for_each_possible_cpu(c) {
-		for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
+		for (i = 0; snmp4_tcp_list[i].name; i++)
 			buff[i] += snmp_get_cpu_field(net->mib.tcp_statistics,
 						c, snmp4_tcp_list[i].entry);
 	}
 
-	for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
+	for (i = 0; snmp4_tcp_list[i].name; i++) {
 		/* MaxConn field is signed, RFC 2012 */
 		if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
 			seq_printf(seq, " %ld", buff[i]);
@@ -442,15 +442,15 @@ static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
 	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	for_each_possible_cpu(c) {
-		for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+		for (i = 0; snmp4_udp_list[i].name; i++)
 			buff[i] += snmp_get_cpu_field(net->mib.udp_statistics,
 						c, snmp4_udp_list[i].entry);
 	}
 	seq_puts(seq, "\nUdp:");
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
 	seq_puts(seq, "\nUdp:");
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %lu", buff[i]);
 
 	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
@@ -458,14 +458,14 @@ static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
 	/* the UDP and UDP-Lite MIBs are the same */
 	seq_puts(seq, "\nUdpLite:");
 	for_each_possible_cpu(c) {
-		for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+		for (i = 0; snmp4_udp_list[i].name; i++)
 			buff[i] += snmp_get_cpu_field(net->mib.udplite_statistics,
 						c, snmp4_udp_list[i].entry);
 	}
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
 	seq_puts(seq, "\nUdpLite:");
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %lu", buff[i]);
 
 	seq_putc(seq, '\n');
@@ -513,29 +513,29 @@ static int netstat_seq_show(struct seq_file *seq, void *v)
 	memset(buff64, 0, sizeof(u64) * IPSTATS_MIB_MAX);
 
 	seq_puts(seq, "TcpExt:");
-	for (i = 0; snmp4_net_list[i].name != NULL; i++)
+	for (i = 0; snmp4_net_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_net_list[i].name);
 
 	seq_puts(seq, "\nTcpExt:");
 	for_each_possible_cpu(c)
-		for (i = 0; snmp4_net_list[i].name != NULL; i++)
+		for (i = 0; snmp4_net_list[i].name; i++)
 			buff[i] += snmp_get_cpu_field(net->mib.net_statistics,
 						c, snmp4_net_list[i].entry);
-	for (i = 0; snmp4_net_list[i].name != NULL; i++)
+	for (i = 0; snmp4_net_list[i].name; i++)
 		seq_printf(seq, " %lu", buff[i]);
 
 	seq_puts(seq, "\nIpExt:");
-	for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipextstats_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_ipextstats_list[i].name);
 
 	seq_puts(seq, "\nIpExt:");
 	for_each_possible_cpu(c)
-		for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
+		for (i = 0; snmp4_ipextstats_list[i].name; i++)
 			buff64[i] += snmp_get_cpu_field64(
 					net->mib.ip_statistics,
 					c, snmp4_ipextstats_list[i].entry,
 					offsetof(struct ipstats_mib, syncp));
-	for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipextstats_list[i].name; i++)
 		seq_printf(seq, " %llu", buff64[i]);
 
 	seq_putc(seq, '\n');
diff --git a/net/sctp/proc.c b/net/sctp/proc.c
index 085fb95..816a5e8 100644
--- a/net/sctp/proc.c
+++ b/net/sctp/proc.c
@@ -80,11 +80,11 @@ static int sctp_snmp_seq_show(struct seq_file *seq, void *v)
 	memset(buff, 0, sizeof(unsigned long) * SCTP_MIB_MAX);
 
 	for_each_possible_cpu(c)
-		for (i = 0; sctp_snmp_list[i].name != NULL; i++)
+		for (i = 0; sctp_snmp_list[i].name; i++)
 			buff[i] += snmp_get_cpu_field(
 						net->sctp.sctp_statistics,
 						c, sctp_snmp_list[i].entry);
-	for (i = 0; sctp_snmp_list[i].name != NULL; i++)
+	for (i = 0; sctp_snmp_list[i].name; i++)
 		seq_printf(seq, "%-32s\t%ld\n", sctp_snmp_list[i].name,
 						buff[i]);
 
diff --git a/net/xfrm/xfrm_proc.c b/net/xfrm/xfrm_proc.c
index c9df546..2f1da2d 100644
--- a/net/xfrm/xfrm_proc.c
+++ b/net/xfrm/xfrm_proc.c
@@ -57,11 +57,11 @@ static int xfrm_statistics_seq_show(struct seq_file *seq, void *v)
 	memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_XFRMMAX);
 
 	for_each_possible_cpu(c)
-		for (i = 0; xfrm_mib_list[i].name != NULL; i++)
+		for (i = 0; xfrm_mib_list[i].name; i++)
 			buff[i] += snmp_get_cpu_field(
 						net->mib.xfrm_statistics,
 						c, xfrm_mib_list[i].entry);
-	for (i = 0; xfrm_mib_list[i].name != NULL; i++)
+	for (i = 0; xfrm_mib_list[i].name; i++)
 		seq_printf(seq, "%-24s\t%lu\n", xfrm_mib_list[i].name,
 						buff[i]);
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [RFC PATCH v2 0/6] Reduce cache miss for snmp_fold_field
  2016-09-06  2:30 ` Jia He
@ 2016-09-06 12:44   ` Marcelo Ricardo Leitner
  -1 siblings, 0 replies; 22+ messages in thread
From: Marcelo Ricardo Leitner @ 2016-09-06 12:44 UTC (permalink / raw)
  To: Jia He
  Cc: netdev, linux-sctp, linux-kernel, davem, Alexey Kuznetsov,
	James Morris, Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich,
	Neil Horman, Steffen Klassert, Herbert Xu

On Tue, Sep 06, 2016 at 10:30:03AM +0800, Jia He wrote:
...
> v2:
> - 1/6 fix bug in udplite statistics. 
> - 1/6 snmp_seq_show is split into 2 parts
> 
> Jia He (6):
>   proc: Reduce cache miss in {snmp,netstat}_seq_show
>   proc: Reduce cache miss in snmp6_seq_show
>   proc: Reduce cache miss in sctp_snmp_seq_show
>   proc: Reduce cache miss in xfrm_statistics_seq_show
>   ipv6: Remove useless parameter in __snmp6_fill_statsdev
>   net: Suppress the "Comparison to NULL could be written" warning

Hi Jia,

Did you try to come up with a generic interface for this, like
snmp_fold_fields64() (note the fieldS) or snmp_fold_field64_batch()?

Sounds like we have the same code in several places, and it seems they
all operate very similarly: a percpu table, a known maximum, a
destination buffer...

If this is possible, it would reduce the chance of introducing a bug in
any one of these call sites.

  Marcelo
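
(For illustration only: a batch helper along the lines suggested above
might look roughly like this; the name and signature are assumptions,
not an existing kernel API.)

static void snmp_fold_fields(void __percpu *mib, unsigned long *buff,
			     const struct snmp_mib *itemlist)
{
	int i, c;

	/* one pass over the CPUs, accumulating every listed field */
	for_each_possible_cpu(c)
		for (i = 0; itemlist[i].name; i++)
			buff[i] += snmp_get_cpu_field(mib, c,
						      itemlist[i].entry);
}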

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC PATCH v2 1/6] proc: Reduce cache miss in {snmp,netstat}_seq_show
  2016-09-06  2:30   ` Jia He
@ 2016-09-06 22:57     ` David Miller
  -1 siblings, 0 replies; 22+ messages in thread
From: David Miller @ 2016-09-06 22:57 UTC (permalink / raw)
  To: hejianet
  Cc: netdev, linux-sctp, linux-kernel, kuznet, jmorris, yoshfuji,
	kaber, vyasevich, nhorman, steffen.klassert, herbert

From: Jia He <hejianet@gmail.com>
Date: Tue,  6 Sep 2016 10:30:04 +0800

> +#define MAX(a, b) ((u32)(a) >= (u32)(b) ? (a) : (b))

Please do not define private min/max macros, use the existing max_t()
or similar as needed.
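
For reference, a minimal sketch of the requested style using the existing max_t() from <linux/kernel.h>; the helper name is hypothetical, and the two sizes are simply the constants visible in the hunks earlier in the thread, not necessarily the operands patch 1/6 compares:

/* was: #define MAX(a, b) ((u32)(a) >= (u32)(b) ? (a) : (b)) */
static inline u32 snmp_seq_buff_len(void)
{
	return max_t(u32, sizeof(unsigned long) * SCTP_MIB_MAX,
			  sizeof(unsigned long) * LINUX_MIB_XFRMMAX);
}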


* Re: [RFC PATCH v2 1/6] proc: Reduce cache miss in {snmp,netstat}_seq_show
  2016-09-06 22:57     ` David Miller
@ 2016-09-07  2:29       ` hejianet
  -1 siblings, 0 replies; 22+ messages in thread
From: hejianet @ 2016-09-07  2:29 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, linux-sctp, linux-kernel, kuznet, jmorris, yoshfuji,
	kaber, vyasevich, nhorman, steffen.klassert, herbert



On 9/7/16 6:57 AM, David Miller wrote:
> From: Jia He <hejianet@gmail.com>
> Date: Tue,  6 Sep 2016 10:30:04 +0800
>
>> +#define MAX(a, b) ((u32)(a) >= (u32)(b) ? (a) : (b))
>
> Please do not define private min/max macros, use the existing max_t()
> or similar as needed.

Thanks
B.R.
Jia

* Re: [RFC PATCH v2 0/6] Reduce cache miss for snmp_fold_field
  2016-09-06 12:44   ` Marcelo Ricardo Leitner
@ 2016-09-07  2:30     ` hejianet
  -1 siblings, 0 replies; 22+ messages in thread
From: hejianet @ 2016-09-07  2:30 UTC (permalink / raw)
  To: Marcelo Ricardo Leitner
  Cc: netdev, linux-sctp, linux-kernel, davem, Alexey Kuznetsov,
	James Morris, Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich,
	Neil Horman, Steffen Klassert, Herbert Xu

Hi Marcelo

Thanks for the suggestion

Will consider that

B.R.

Jia


On 9/6/16 8:44 PM, Marcelo Ricardo Leitner wrote:
> On Tue, Sep 06, 2016 at 10:30:03AM +0800, Jia He wrote:
> ...
>> v2:
>> - 1/6 fix bug in udplite statistics.
>> - 1/6 snmp_seq_show is split into 2 parts
>>
>> Jia He (6):
>>    proc: Reduce cache miss in {snmp,netstat}_seq_show
>>    proc: Reduce cache miss in snmp6_seq_show
>>    proc: Reduce cache miss in sctp_snmp_seq_show
>>    proc: Reduce cache miss in xfrm_statistics_seq_show
>>    ipv6: Remove useless parameter in __snmp6_fill_statsdev
>>    net: Suppress the "Comparison to NULL could be written" warning
> Hi Jia,
>
> Did you try to come up with a generic interface for this, like
> snmp_fold_fields64() (note the fieldS) or snmp_fold_field64_batch()?
>
> It sounds like we have the same code in several places, and they all seem
> to operate very similarly: each has a per-cpu table, a known maximum, and
> a destination buffer.
>
> If this is possible, it would reduce the chance of problems creeping into
> any one copy of the code.
>
>    Marcelo
>
>


Thread overview:
2016-09-06  2:30 [RFC PATCH v2 0/6] Reduce cache miss for snmp_fold_field Jia He
2016-09-06  2:30 ` [RFC PATCH v2 1/6] proc: Reduce cache miss in {snmp,netstat}_seq_show Jia He
2016-09-06 22:57   ` David Miller
2016-09-07  2:29     ` hejianet
2016-09-06  2:30 ` [RFC PATCH v2 2/6] proc: Reduce cache miss in snmp6_seq_show Jia He
2016-09-06  2:30 ` [RFC PATCH v2 3/6] proc: Reduce cache miss in sctp_snmp_seq_show Jia He
2016-09-06  2:30 ` [RFC PATCH v2 4/6] proc: Reduce cache miss in xfrm_statistics_seq_show Jia He
2016-09-06  2:30 ` [RFC PATCH v2 5/6] ipv6: Remove useless parameter in __snmp6_fill_statsdev Jia He
2016-09-06  2:30 ` [RFC PATCH v2 6/6] net: Suppress the "Comparison to NULL could be written" warning Jia He
2016-09-06 12:44 ` [RFC PATCH v2 0/6] Reduce cache miss for snmp_fold_field Marcelo Ricardo Leitner
2016-09-07  2:30   ` hejianet