* [PATCH v5 0/7] Reduce cache miss for snmp_fold_field
@ 2016-09-28  6:22 ` Jia He
  0 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

On a PowerPC server with a large number of CPUs (160), even with commit
a3a773726c9f ("net: Optimize snmp stat aggregation by walking all
the percpu data at once") applied, I observed several other snmp_fold_field
call sites that still cause a high cache-miss rate.
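
The difference in access pattern is roughly the following sketch (stats_list,
mib and sum are placeholders here; snmp_get_cpu_field() and
for_each_possible_cpu() are the existing kernel helpers):

	/* per-field folding: every item walks all 160 cpus, so each cpu's
	 * percpu area is re-fetched once per item
	 */
	for (i = 0; stats_list[i].name; i++)
		for_each_possible_cpu(c)
			sum[i] += snmp_get_cpu_field(mib, c, stats_list[i].entry);

	/* batched folding: each cpu's percpu area is walked once, item by
	 * item, which is what the batch interfaces added by patch 1 do
	 */
	for_each_possible_cpu(c)
		for (i = 0; stats_list[i].name; i++)
			sum[i] += snmp_get_cpu_field(mib, c, stats_list[i].entry);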

test source code:
================
My simple test case, which reads the given procfs file endlessly:
/***********************************************************/
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#define LINELEN  2560

int main(int argc, char **argv)
{
        char buf[LINELEN + 1];
        int rdsize = 0;
        int fd = -1;
        unsigned int i;

        memset(buf, 0, sizeof(buf));

        if (argc <= 1) {
                printf("file name empty\n");
                return -1;
        }

        /* the procfs stat files are read-only */
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                printf("open error\n");
                return -2;
        }

        /* read the whole file, rewind and repeat */
        for (i = 0; i < 0xffffffff; i++) {
                while ((rdsize = read(fd, buf, LINELEN)) > 0) {
                        /* nothing here, the reads themselves are the load */
                }

                lseek(fd, 0, SEEK_SET);
        }

        close(fd);
        return 0;
}
/**********************************************************/

compile and run:
================
gcc test.c -o test

perf stat -d -e cache-misses ./test /proc/net/snmp
perf stat -d -e cache-misses ./test /proc/net/snmp6
perf stat -d -e cache-misses ./test /proc/net/sctp/snmp
perf stat -d -e cache-misses ./test /proc/net/xfrm_stat

Before the patch set:
====================
 Performance counter stats for 'system wide':

         355911097      cache-misses                                                 [40.08%]
        2356829300      L1-dcache-loads                                              [60.04%]
         355642645      L1-dcache-load-misses     #   15.09% of all L1-dcache hits   [60.02%]
         346544541      LLC-loads                                                    [59.97%]
            389763      LLC-load-misses           #    0.11% of all LL-cache hits    [40.02%]

       6.245162638 seconds time elapsed

After the patch set:
===================
 Performance counter stats for 'system wide':

         194992476      cache-misses                                                 [40.03%]
        6718051877      L1-dcache-loads                                              [60.07%]
         194871921      L1-dcache-load-misses     #    2.90% of all L1-dcache hits   [60.11%]
         187632232      LLC-loads                                                    [60.04%]
            464466      LLC-load-misses           #    0.25% of all LL-cache hits    [39.89%]

       6.868422769 seconds time elapsed

The L1-dcache miss rate drops from 15.09% to 2.90%.
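(The rate is perf's L1-dcache-load-misses / L1-dcache-loads ratio:
355642645 / 2356829300 is about 15.09% before, and
194871921 / 6718051877 is about 2.90% after.)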

changelog
=========
v5:
- order local variables from longest to shortest line
v4:
- move the memset into the if branch of snmp6_seq_show_item
- drop the changes to netstat_seq_show, since its stack usage would be too large
v3:
- introduce generic interface (suggested by Marcelo Ricardo Leitner)
- use max_t instead of a self-defined macro (suggested by David Miller)
v2:
- fix a bug in the UDP-Lite statistics
- split snmp_seq_show into two parts

Jia He (7):
  net:snmp: Introduce generic interfaces for snmp_get_cpu_field{,64}
  proc: Reduce cache miss in snmp_seq_show
  proc: Reduce cache miss in snmp6_seq_show
  proc: Reduce cache miss in sctp_snmp_seq_show
  proc: Reduce cache miss in xfrm_statistics_seq_show
  ipv6: Remove useless parameter in __snmp6_fill_statsdev
  net: Suppress the "Comparison to NULL could be written" warnings

 include/net/ip.h     |  23 ++++++++++++
 net/ipv4/proc.c      | 100 +++++++++++++++++++++++++++++++--------------------
 net/ipv6/addrconf.c  |  12 +++----
 net/ipv6/proc.c      |  32 ++++++++++++-----
 net/sctp/proc.c      |  12 ++++---
 net/xfrm/xfrm_proc.c |  12 +++++--
 6 files changed, 131 insertions(+), 60 deletions(-)

-- 
2.5.5

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v5 1/7] net:snmp: Introduce generic interfaces for snmp_get_cpu_field{,64}
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

Introduce generic batch interfaces for snmp_get_cpu_field{,64}. They swap
the order of the two for-loops used to collect the percpu statistics, so
the data is aggregated by walking all the items of each cpu sequentially.
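
A rough usage sketch (mirroring the later /proc show functions; buff,
stats_list, mib and MIB_MAX are placeholders, not names from this patch):

	unsigned long buff[MIB_MAX];	/* MIB_MAX bounds the item count */
	int i;

	memset(buff, 0, sizeof(buff));
	snmp_get_cpu_field_batch(buff, stats_list, mib);
	for (i = 0; stats_list[i].name; i++)
		seq_printf(seq, "%-32s\t%lu\n", stats_list[i].name, buff[i]);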

Signed-off-by: Jia He <hejianet@gmail.com>
Suggested-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
---
 include/net/ip.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/include/net/ip.h b/include/net/ip.h
index 9742b92..bc43c0f 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -219,6 +219,29 @@ static inline u64 snmp_fold_field64(void __percpu *mib, int offt, size_t syncp_o
 }
 #endif
 
+#define snmp_get_cpu_field64_batch(buff64, stats_list, mib_statistic, offset) \
+{ \
+	int i, c; \
+	for_each_possible_cpu(c) { \
+		for (i = 0; stats_list[i].name; i++) \
+			buff64[i] += snmp_get_cpu_field64( \
+					mib_statistic, \
+					c, stats_list[i].entry, \
+					offset); \
+	} \
+}
+
+#define snmp_get_cpu_field_batch(buff, stats_list, mib_statistic) \
+{ \
+	int i, c; \
+	for_each_possible_cpu(c) { \
+		for (i = 0; stats_list[i].name; i++) \
+			buff[i] += snmp_get_cpu_field( \
+						mib_statistic, \
+						c, stats_list[i].entry); \
+	} \
+}
+
 void inet_get_local_port_range(struct net *net, int *low, int *high);
 
 #ifdef CONFIG_SYSCTL
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 2/7] proc: Reduce cache miss in snmp_seq_show
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

Use the generic snmp_get_cpu_field{,64}_batch interfaces to aggregate the
data by walking all the items of each cpu sequentially. snmp_seq_show is
then split into two parts so that only one on-stack buffer is live per
function, which avoids the "frame size is larger than 1024 bytes" build
warning.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv4/proc.c | 68 ++++++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 46 insertions(+), 22 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 9f665b6..e01f4df 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -46,6 +46,8 @@
 #include <net/sock.h>
 #include <net/raw.h>
 
+#define TCPUDP_MIB_MAX max_t(u32, UDP_MIB_MAX, TCP_MIB_MAX)
+
 /*
  *	Report socket allocation statistics [mea@utu.fi]
  */
@@ -378,13 +380,15 @@ static void icmp_put(struct seq_file *seq)
 /*
  *	Called from the PROCfs module. This outputs /proc/net/snmp.
  */
-static int snmp_seq_show(struct seq_file *seq, void *v)
+static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 {
 	int i;
+	u64 buff64[IPSTATS_MIB_MAX];
 	struct net *net = seq->private;
 
-	seq_puts(seq, "Ip: Forwarding DefaultTTL");
+	memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
 
+	seq_puts(seq, "Ip: Forwarding DefaultTTL");
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
 
@@ -393,57 +397,77 @@ static int snmp_seq_show(struct seq_file *seq, void *v)
 		   net->ipv4.sysctl_ip_default_ttl);
 
 	BUILD_BUG_ON(offsetof(struct ipstats_mib, mibs) != 0);
+	snmp_get_cpu_field64_batch(buff64, snmp4_ipstats_list,
+				   net->mib.ip_statistics,
+				   offsetof(struct ipstats_mib, syncp));
 	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
-		seq_printf(seq, " %llu",
-			   snmp_fold_field64(net->mib.ip_statistics,
-					     snmp4_ipstats_list[i].entry,
-					     offsetof(struct ipstats_mib, syncp)));
+		seq_printf(seq, " %llu", buff64[i]);
 
-	icmp_put(seq);	/* RFC 2011 compatibility */
-	icmpmsg_put(seq);
+	return 0;
+}
+
+static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
+{
+	int i;
+	struct net *net = seq->private;
+	unsigned long buff[TCPUDP_MIB_MAX];
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	seq_puts(seq, "\nTcp:");
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_tcp_list[i].name);
 
 	seq_puts(seq, "\nTcp:");
+	snmp_get_cpu_field_batch(buff, snmp4_tcp_list,
+				 net->mib.tcp_statistics);
 	for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
 		/* MaxConn field is signed, RFC 2012 */
 		if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
-			seq_printf(seq, " %ld",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %ld", buff[i]);
 		else
-			seq_printf(seq, " %lu",
-				   snmp_fold_field(net->mib.tcp_statistics,
-						   snmp4_tcp_list[i].entry));
+			seq_printf(seq, " %lu", buff[i]);
 	}
 
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
+
+	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+				 net->mib.udp_statistics);
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdp:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udp_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
+
+	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	/* the UDP and UDP-Lite MIBs are the same */
 	seq_puts(seq, "\nUdpLite:");
+	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
+				 net->mib.udplite_statistics);
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
-
 	seq_puts(seq, "\nUdpLite:");
 	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
-		seq_printf(seq, " %lu",
-			   snmp_fold_field(net->mib.udplite_statistics,
-					   snmp4_udp_list[i].entry));
+		seq_printf(seq, " %lu", buff[i]);
 
 	seq_putc(seq, '\n');
 	return 0;
 }
 
+static int snmp_seq_show(struct seq_file *seq, void *v)
+{
+	snmp_seq_show_ipstats(seq, v);
+
+	icmp_put(seq);	/* RFC 2011 compatibility */
+	icmpmsg_put(seq);
+
+	snmp_seq_show_tcp_udp(seq, v);
+
+	return 0;
+}
+
 static int snmp_seq_open(struct inode *inode, struct file *file)
 {
 	return single_open_net(inode, file, snmp_seq_show);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 3/7] proc: Reduce cache miss in snmp6_seq_show
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

Use the generic snmp_get_cpu_field{,64}_batch interfaces to aggregate the
data by walking all the items of each cpu sequentially.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv6/proc.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/net/ipv6/proc.c b/net/ipv6/proc.c
index 679253d0..e047701 100644
--- a/net/ipv6/proc.c
+++ b/net/ipv6/proc.c
@@ -30,6 +30,11 @@
 #include <net/transp_v6.h>
 #include <net/ipv6.h>
 
+#define MAX4(a, b, c, d) \
+	max_t(u32, max_t(u32, a, b), max_t(u32, c, d))
+#define SNMP_MIB_MAX MAX4(UDP_MIB_MAX, TCP_MIB_MAX, \
+			IPSTATS_MIB_MAX, ICMP_MIB_MAX)
+
 static int sockstat6_seq_show(struct seq_file *seq, void *v)
 {
 	struct net *net = seq->private;
@@ -192,13 +197,19 @@ static void snmp6_seq_show_item(struct seq_file *seq, void __percpu *pcpumib,
 				const struct snmp_mib *itemlist)
 {
 	int i;
-	unsigned long val;
-
-	for (i = 0; itemlist[i].name; i++) {
-		val = pcpumib ?
-			snmp_fold_field(pcpumib, itemlist[i].entry) :
-			atomic_long_read(smib + itemlist[i].entry);
-		seq_printf(seq, "%-32s\t%lu\n", itemlist[i].name, val);
+	unsigned long buff[SNMP_MIB_MAX];
+
+	if (pcpumib) {
+		memset(buff, 0, sizeof(unsigned long) * SNMP_MIB_MAX);
+
+		snmp_get_cpu_field_batch(buff, itemlist, pcpumib);
+		for (i = 0; itemlist[i].name; i++)
+			seq_printf(seq, "%-32s\t%lu\n",
+				   itemlist[i].name, buff[i]);
+	} else {
+		for (i = 0; itemlist[i].name; i++)
+			seq_printf(seq, "%-32s\t%lu\n", itemlist[i].name,
+				   atomic_long_read(smib + itemlist[i].entry));
 	}
 }
 
@@ -206,10 +217,13 @@ static void snmp6_seq_show_item64(struct seq_file *seq, void __percpu *mib,
 				  const struct snmp_mib *itemlist, size_t syncpoff)
 {
 	int i;
+	u64 buff64[SNMP_MIB_MAX];
+
+	memset(buff64, 0, sizeof(u64) * SNMP_MIB_MAX);
 
+	snmp_get_cpu_field64_batch(buff64, itemlist, mib, syncpoff);
 	for (i = 0; itemlist[i].name; i++)
-		seq_printf(seq, "%-32s\t%llu\n", itemlist[i].name,
-			   snmp_fold_field64(mib, itemlist[i].entry, syncpoff));
+		seq_printf(seq, "%-32s\t%llu\n", itemlist[i].name, buff64[i]);
 }
 
 static int snmp6_seq_show(struct seq_file *seq, void *v)
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 4/7] proc: Reduce cache miss in sctp_snmp_seq_show
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

Use the generic snmp_get_cpu_field{,64}_batch interfaces to aggregate the
data by walking all the items of each cpu sequentially.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/sctp/proc.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/net/sctp/proc.c b/net/sctp/proc.c
index ef8ba77..9e7f4a7 100644
--- a/net/sctp/proc.c
+++ b/net/sctp/proc.c
@@ -73,13 +73,17 @@ static const struct snmp_mib sctp_snmp_list[] = {
 /* Display sctp snmp mib statistics(/proc/net/sctp/snmp). */
 static int sctp_snmp_seq_show(struct seq_file *seq, void *v)
 {
-	struct net *net = seq->private;
 	int i;
+	struct net *net = seq->private;
+	unsigned long buff[SCTP_MIB_MAX];
+
+	memset(buff, 0, sizeof(unsigned long) * SCTP_MIB_MAX);
 
+	snmp_get_cpu_field_batch(buff, sctp_snmp_list,
+				 net->sctp.sctp_statistics);
 	for (i = 0; sctp_snmp_list[i].name != NULL; i++)
 		seq_printf(seq, "%-32s\t%ld\n", sctp_snmp_list[i].name,
-			   snmp_fold_field(net->sctp.sctp_statistics,
-				      sctp_snmp_list[i].entry));
+						buff[i]);
 
 	return 0;
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 5/7] proc: Reduce cache miss in xfrm_statistics_seq_show
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

Use the generic snmp_get_cpu_field{,64}_batch interfaces to aggregate the
data by walking all the items of each cpu sequentially.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/xfrm/xfrm_proc.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/net/xfrm/xfrm_proc.c b/net/xfrm/xfrm_proc.c
index 9c4fbd8..dc0cea6 100644
--- a/net/xfrm/xfrm_proc.c
+++ b/net/xfrm/xfrm_proc.c
@@ -50,12 +50,18 @@ static const struct snmp_mib xfrm_mib_list[] = {
 
 static int xfrm_statistics_seq_show(struct seq_file *seq, void *v)
 {
-	struct net *net = seq->private;
 	int i;
+	struct net *net = seq->private;
+	unsigned long buff[LINUX_MIB_XFRMMAX];
+
+	memset(buff, 0, sizeof(unsigned long) * LINUX_MIB_XFRMMAX);
+
+	snmp_get_cpu_field_batch(buff, xfrm_mib_list,
+				 net->mib.xfrm_statistics);
 	for (i = 0; xfrm_mib_list[i].name; i++)
 		seq_printf(seq, "%-24s\t%lu\n", xfrm_mib_list[i].name,
-			   snmp_fold_field(net->mib.xfrm_statistics,
-					   xfrm_mib_list[i].entry));
+						buff[i]);
+
 	return 0;
 }
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 6/7] ipv6: Remove useless parameter in __snmp6_fill_statsdev
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

The parameter 'items' is always ICMP6_MIB_MAX, so it is useless in
__snmp6_fill_statsdev.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv6/addrconf.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 2f1f5d4..35d4baa 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -4961,18 +4961,18 @@ static inline size_t inet6_if_nlmsg_size(void)
 }
 
 static inline void __snmp6_fill_statsdev(u64 *stats, atomic_long_t *mib,
-				      int items, int bytes)
+					int bytes)
 {
 	int i;
-	int pad = bytes - sizeof(u64) * items;
+	int pad = bytes - sizeof(u64) * ICMP6_MIB_MAX;
 	BUG_ON(pad < 0);
 
 	/* Use put_unaligned() because stats may not be aligned for u64. */
-	put_unaligned(items, &stats[0]);
-	for (i = 1; i < items; i++)
+	put_unaligned(ICMP6_MIB_MAX, &stats[0]);
+	for (i = 1; i < ICMP6_MIB_MAX; i++)
 		put_unaligned(atomic_long_read(&mib[i]), &stats[i]);
 
-	memset(&stats[items], 0, pad);
+	memset(&stats[ICMP6_MIB_MAX], 0, pad);
 }
 
 static inline void __snmp6_fill_stats64(u64 *stats, void __percpu *mib,
@@ -5005,7 +5005,7 @@ static void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype,
 				     offsetof(struct ipstats_mib, syncp));
 		break;
 	case IFLA_INET6_ICMP6STATS:
-		__snmp6_fill_statsdev(stats, idev->stats.icmpv6dev->mibs, ICMP6_MIB_MAX, bytes);
+		__snmp6_fill_statsdev(stats, idev->stats.icmpv6dev->mibs, bytes);
 		break;
 	}
 }
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v5 7/7] net: Suppress the "Comparison to NULL could be written" warnings
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  6:22   ` Jia He
  -1 siblings, 0 replies; 22+ messages in thread
From: Jia He @ 2016-09-28  6:22 UTC (permalink / raw)
  To: netdev
  Cc: linux-sctp, linux-kernel, davem, Alexey Kuznetsov, James Morris,
	Hideaki YOSHIFUJI, Patrick McHardy, Vlad Yasevich, Neil Horman,
	Steffen Klassert, Herbert Xu, marcelo.leitner, Jia He

This suppresses the checkpatch.pl warning "Comparison to NULL could
be written". No functional change.

Signed-off-by: Jia He <hejianet@gmail.com>
---
 net/ipv4/proc.c | 32 ++++++++++++++++----------------
 net/sctp/proc.c |  2 +-
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index e01f4df..1a7a773 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -357,22 +357,22 @@ static void icmp_put(struct seq_file *seq)
 	atomic_long_t *ptr = net->mib.icmpmsg_statistics->mibs;
 
 	seq_puts(seq, "\nIcmp: InMsgs InErrors InCsumErrors");
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " In%s", icmpmibmap[i].name);
 	seq_puts(seq, " OutMsgs OutErrors");
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " Out%s", icmpmibmap[i].name);
 	seq_printf(seq, "\nIcmp: %lu %lu %lu",
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_INMSGS),
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_INERRORS),
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_CSUMERRORS));
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " %lu",
 			   atomic_long_read(ptr + icmpmibmap[i].index));
 	seq_printf(seq, " %lu %lu",
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_OUTMSGS),
 		snmp_fold_field(net->mib.icmp_statistics, ICMP_MIB_OUTERRORS));
-	for (i = 0; icmpmibmap[i].name != NULL; i++)
+	for (i = 0; icmpmibmap[i].name; i++)
 		seq_printf(seq, " %lu",
 			   atomic_long_read(ptr + (icmpmibmap[i].index | 0x100)));
 }
@@ -389,7 +389,7 @@ static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 	memset(buff64, 0, IPSTATS_MIB_MAX * sizeof(u64));
 
 	seq_puts(seq, "Ip: Forwarding DefaultTTL");
-	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipstats_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_ipstats_list[i].name);
 
 	seq_printf(seq, "\nIp: %d %d",
@@ -400,7 +400,7 @@ static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 	snmp_get_cpu_field64_batch(buff64, snmp4_ipstats_list,
 				   net->mib.ip_statistics,
 				   offsetof(struct ipstats_mib, syncp));
-	for (i = 0; snmp4_ipstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipstats_list[i].name; i++)
 		seq_printf(seq, " %llu", buff64[i]);
 
 	return 0;
@@ -415,13 +415,13 @@ static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
 	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
 
 	seq_puts(seq, "\nTcp:");
-	for (i = 0; snmp4_tcp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_tcp_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_tcp_list[i].name);
 
 	seq_puts(seq, "\nTcp:");
 	snmp_get_cpu_field_batch(buff, snmp4_tcp_list,
 				 net->mib.tcp_statistics);
-	for (i = 0; snmp4_tcp_list[i].name != NULL; i++) {
+	for (i = 0; snmp4_tcp_list[i].name; i++) {
 		/* MaxConn field is signed, RFC 2012 */
 		if (snmp4_tcp_list[i].entry == TCP_MIB_MAXCONN)
 			seq_printf(seq, " %ld", buff[i]);
@@ -434,10 +434,10 @@ static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
 	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
 				 net->mib.udp_statistics);
 	seq_puts(seq, "\nUdp:");
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
 	seq_puts(seq, "\nUdp:");
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %lu", buff[i]);
 
 	memset(buff, 0, TCPUDP_MIB_MAX * sizeof(unsigned long));
@@ -446,10 +446,10 @@ static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
 	seq_puts(seq, "\nUdpLite:");
 	snmp_get_cpu_field_batch(buff, snmp4_udp_list,
 				 net->mib.udplite_statistics);
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_udp_list[i].name);
 	seq_puts(seq, "\nUdpLite:");
-	for (i = 0; snmp4_udp_list[i].name != NULL; i++)
+	for (i = 0; snmp4_udp_list[i].name; i++)
 		seq_printf(seq, " %lu", buff[i]);
 
 	seq_putc(seq, '\n');
@@ -492,21 +492,21 @@ static int netstat_seq_show(struct seq_file *seq, void *v)
 	struct net *net = seq->private;
 
 	seq_puts(seq, "TcpExt:");
-	for (i = 0; snmp4_net_list[i].name != NULL; i++)
+	for (i = 0; snmp4_net_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_net_list[i].name);
 
 	seq_puts(seq, "\nTcpExt:");
-	for (i = 0; snmp4_net_list[i].name != NULL; i++)
+	for (i = 0; snmp4_net_list[i].name; i++)
 		seq_printf(seq, " %lu",
 			   snmp_fold_field(net->mib.net_statistics,
 					   snmp4_net_list[i].entry));
 
 	seq_puts(seq, "\nIpExt:");
-	for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipextstats_list[i].name; i++)
 		seq_printf(seq, " %s", snmp4_ipextstats_list[i].name);
 
 	seq_puts(seq, "\nIpExt:");
-	for (i = 0; snmp4_ipextstats_list[i].name != NULL; i++)
+	for (i = 0; snmp4_ipextstats_list[i].name; i++)
 		seq_printf(seq, " %llu",
 			   snmp_fold_field64(net->mib.ip_statistics,
 					     snmp4_ipextstats_list[i].entry,
diff --git a/net/sctp/proc.c b/net/sctp/proc.c
index 9e7f4a7..16d7f3f 100644
--- a/net/sctp/proc.c
+++ b/net/sctp/proc.c
@@ -81,7 +81,7 @@ static int sctp_snmp_seq_show(struct seq_file *seq, void *v)
 
 	snmp_get_cpu_field_batch(buff, sctp_snmp_list,
 				 net->sctp.sctp_statistics);
-	for (i = 0; sctp_snmp_list[i].name != NULL; i++)
+	for (i = 0; sctp_snmp_list[i].name; i++)
 		seq_printf(seq, "%-32s\t%ld\n", sctp_snmp_list[i].name,
 						buff[i]);
 
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 0/7] Reduce cache miss for snmp_fold_field
  2016-09-28  6:22 ` Jia He
@ 2016-09-28  9:08   ` David Miller
  -1 siblings, 0 replies; 22+ messages in thread
From: David Miller @ 2016-09-28  9:08 UTC (permalink / raw)
  To: hejianet
  Cc: netdev, linux-sctp, linux-kernel, kuznet, jmorris, yoshfuji,
	kaber, vyasevich, nhorman, steffen.klassert, herbert,
	marcelo.leitner

From: Jia He <hejianet@gmail.com>
Date: Wed, 28 Sep 2016 14:22:21 +0800

> v5:
> - order local variables from longest to shortest line

I still see many cases where this problem still exists.  Please
do not resubmit this patch series until you fix all of them.

Patch #2:

-static int snmp_seq_show(struct seq_file *seq, void *v)
+static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
 {
 	int i;
+	u64 buff64[IPSTATS_MIB_MAX];
 	struct net *net = seq->private;

The order should be "net" then "buff64" then "i".

+static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
+{
+	int i;
+	struct net *net = seq->private;
+	unsigned long buff[TCPUDP_MIB_MAX];

The order should be "buff", "net", then "i".

Patch #3:

@@ -192,13 +197,19 @@ static void snmp6_seq_show_item(struct seq_file *seq, void __percpu *pcpumib,
 				const struct snmp_mib *itemlist)
 {
 	int i;
-	unsigned long val;
-
 ...
+	unsigned long buff[SNMP_MIB_MAX];

The order should be "buff" then "i".

@@ -206,10 +217,13 @@ static void snmp6_seq_show_item64(struct seq_file *seq, void __percpu *mib,
 				  const struct snmp_mib *itemlist, size_t syncpoff)
 {
 	int i;
+	u64 buff64[SNMP_MIB_MAX];

Likewise.

I cannot be any more explicit in my request than this.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v5 0/7] Reduce cache miss for snmp_fold_field
  2016-09-28  9:08   ` David Miller
@ 2016-09-28 13:45     ` hejianet
  -1 siblings, 0 replies; 22+ messages in thread
From: hejianet @ 2016-09-28 13:45 UTC (permalink / raw)
  To: David Miller
  Cc: netdev, linux-sctp, linux-kernel, kuznet, jmorris, yoshfuji,
	kaber, vyasevich, nhorman, steffen.klassert, herbert,
	marcelo.leitner



On 9/28/16 5:08 PM, David Miller wrote:
> From: Jia He <hejianet@gmail.com>
> Date: Wed, 28 Sep 2016 14:22:21 +0800
>
>> v5:
>> - order local variables from longest to shortest line
> I still see many cases where this problem still exists.  Please
> do not resubmit this patch series until you fix all of them.
>
> Patch #2:
>
> -static int snmp_seq_show(struct seq_file *seq, void *v)
> +static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
>   {
>   	int i;
> +	u64 buff64[IPSTATS_MIB_MAX];
>   	struct net *net = seq->private;
>
> The order should be "net" then "buff64" then "i".
Sorry for my bad eyesight and quick hand :(
B.R.
Jia
>
> +static int snmp_seq_show_tcp_udp(struct seq_file *seq, void *v)
> +{
> +	int i;
> +	struct net *net = seq->private;
> +	unsigned long buff[TCPUDP_MIB_MAX];
>
> The order should be "buff", "net", then "i".
>
> Patch #3:
>
> @@ -192,13 +197,19 @@ static void snmp6_seq_show_item(struct seq_file *seq, void __percpu *pcpumib,
>   				const struct snmp_mib *itemlist)
>   {
>   	int i;
> -	unsigned long val;
> -
>   ...
> +	unsigned long buff[SNMP_MIB_MAX];
>
> The order should be "buff" then "i".
>
> @@ -206,10 +217,13 @@ static void snmp6_seq_show_item64(struct seq_file *seq, void __percpu *mib,
>   				  const struct snmp_mib *itemlist, size_t syncpoff)
>   {
>   	int i;
> +	u64 buff64[SNMP_MIB_MAX];
>
> Likewise.
>
> I cannot be any more explicit in my request than this.
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Local variable ordering (was Re: [PATCH v5 0/7] Reduce cache miss for snmp_fold_field)
  2016-09-28 13:45     ` hejianet
@ 2016-09-28 15:58       ` Edward Cree
  -1 siblings, 0 replies; 22+ messages in thread
From: Edward Cree @ 2016-09-28 15:58 UTC (permalink / raw)
  To: hejianet
  Cc: netdev, linux-sctp, linux-kernel, kuznet, jmorris, yoshfuji,
	kaber, vyasevich, nhorman, steffen.klassert, herbert,
	marcelo.leitner

On 28/09/16 14:45, hejianet wrote:
>
>
> On 9/28/16 5:08 PM, David Miller wrote:
>> From: Jia He <hejianet@gmail.com>
>> Date: Wed, 28 Sep 2016 14:22:21 +0800
>>
>>> v5:
>>> - order local variables from longest to shortest line
>> I still see many cases where this problem still exists.  Please
>> do not resubmit this patch series until you fix all of them.
>>
>> Patch #2:
>>
>> -static int snmp_seq_show(struct seq_file *seq, void *v)
>> +static int snmp_seq_show_ipstats(struct seq_file *seq, void *v)
>>   {
>>       int i;
>> +    u64 buff64[IPSTATS_MIB_MAX];
>>       struct net *net = seq->private;
>>
>> The order should be "net" then "buff64" then "i".
> Sorry for my bad eyesight and quick hand :(
> B.R.
> Jia
A few months back I wrote a script to help find these:
https://github.com/ecree-solarflare/xmastree
It can check a source file or a patch.
Posting in the hope that people will find it useful.

-Ed

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2016-09-28 15:58 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-28  6:22 [PATCH v5 0/7] Reduce cache miss for snmp_fold_field Jia He
2016-09-28  6:22 ` [PATCH v5 1/7] net:snmp: Introduce generic interfaces for snmp_get_cpu_field{,64} Jia He
2016-09-28  6:22 ` [PATCH v5 2/7] proc: Reduce cache miss in snmp_seq_show Jia He
2016-09-28  6:22 ` [PATCH v5 3/7] proc: Reduce cache miss in snmp6_seq_show Jia He
2016-09-28  6:22 ` [PATCH v5 4/7] proc: Reduce cache miss in sctp_snmp_seq_show Jia He
2016-09-28  6:22 ` [PATCH v5 5/7] proc: Reduce cache miss in xfrm_statistics_seq_show Jia He
2016-09-28  6:22 ` [PATCH v5 6/7] ipv6: Remove useless parameter in __snmp6_fill_statsdev Jia He
2016-09-28  6:22 ` [PATCH v5 7/7] net: Suppress the "Comparison to NULL could be written" warnings Jia He
2016-09-28  9:08 ` [PATCH v5 0/7] Reduce cache miss for snmp_fold_field David Miller
2016-09-28 13:45   ` hejianet
2016-09-28 15:58     ` Local variable ordering (was Re: [PATCH v5 0/7] Reduce cache miss for snmp_fold_field) Edward Cree
