* [PATCH bpf-next 0/4] tools/bpf: add bpftool net support
@ 2018-09-05 23:58 Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 1/4] tools/bpf: sync kernel uapi header if_link.h to tools Yonghong Song
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Yonghong Song @ 2018-09-05 23:58 UTC (permalink / raw)
  To: ast, daniel, netdev; +Cc: kernel-team

As bpf usage becomes more pervasive, people start to worry
about its cpu and memory cost. On a particular host,
people often want to know all running bpf programs
and their attachment contexts, so they can quickly relate
a performance/memory anomaly to a particular bpf
program or application.

bpftool already provides pretty good coverage for perf
and cgroup related attachments. This patch set enables
dumping attachment info for xdp and tc bpf programs.

Currently, users can already use "ip link show <dev>" and
"tc filter show dev <dev> ..." to dump bpf program attachment
information for xdp and tc bpf programs. The main reason
to implement this functionality in bpftool as well is a
better user experience. We want bpftool to be the
ultimate tool for bpf introspection. The bpftool net
implementation presents only the necessary bpf attachment
information to the user, ignoring most other ip/tc
specific information.

For example, below is a pretty json print of the xdp
and tc_filters attachments.

  $ ./bpftool -jp net
  [{
        "xdp": [{
                "ifindex": 2,
                "devname": "eth0",
                "prog_id": 198
            }
        ],
        "tc_filters": [{
                "ifindex": 2,
                "kind": "qdisc_htb",
                "name": "prefix_matcher.o:[cls_prefix_matcher_htb]",
                "prog_id": 111727,
                "tag": "d08fe3b4319bc2fd",
                "act": []
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_ingress",
                "name": "fbflow_icmp",
                "prog_id": 130246,
                "tag": "3f265c7f26db62c9",
                "act": []
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_egress",
                "name": "prefix_matcher.o:[cls_prefix_matcher_clsact]",
                "prog_id": 111726,
                "tag": "99a197826974c876"
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_egress",
                "name": "cls_fg_dscp",
                "prog_id": 108619,
                "tag": "dc4630674fd72dcc",
                "act": []
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_egress",
                "name": "fbflow_egress",
                "prog_id": 130245,
                "tag": "72d2d830d6888d2c"
            }
        ]
    }
  ]

Patch #1 syncs the kernel uapi header if_link.h to the tools
directory. Patch #2 moves the netlink related functions in
tools/lib/bpf/bpf.c to a new file. Patch #3 implements
additional functions in libbpf which will be used in Patch #4.
Patch #4 implements bpftool net support to dump
xdp and tc bpf program attachments.

Yonghong Song (4):
  tools/bpf: sync kernel uapi header if_link.h to tools
  tools/bpf: move bpf/lib netlink related functions into a new file
  tools/bpf: add more netlink functionalities in lib/bpf
  tools/bpf: bpftool: add net support

 .../bpf/bpftool/Documentation/bpftool-net.rst | 133 +++++++
 tools/bpf/bpftool/Documentation/bpftool.rst   |   6 +-
 tools/bpf/bpftool/bash-completion/bpftool     |  17 +-
 tools/bpf/bpftool/main.c                      |   3 +-
 tools/bpf/bpftool/main.h                      |   7 +
 tools/bpf/bpftool/net.c                       | 233 +++++++++++++
 tools/bpf/bpftool/netlink_dumper.c            | 181 ++++++++++
 tools/bpf/bpftool/netlink_dumper.h            | 103 ++++++
 tools/include/uapi/linux/if_link.h            |  17 +
 tools/lib/bpf/Build                           |   2 +-
 tools/lib/bpf/bpf.c                           | 129 -------
 tools/lib/bpf/libbpf.h                        |  16 +
 tools/lib/bpf/libbpf_errno.c                  |   1 +
 tools/lib/bpf/netlink.c                       | 324 ++++++++++++++++++
 tools/lib/bpf/nlattr.c                        |  33 +-
 tools/lib/bpf/nlattr.h                        |  38 ++
 16 files changed, 1094 insertions(+), 149 deletions(-)
 create mode 100644 tools/bpf/bpftool/Documentation/bpftool-net.rst
 create mode 100644 tools/bpf/bpftool/net.c
 create mode 100644 tools/bpf/bpftool/netlink_dumper.c
 create mode 100644 tools/bpf/bpftool/netlink_dumper.h
 create mode 100644 tools/lib/bpf/netlink.c

-- 
2.17.1


* [PATCH bpf-next 1/4] tools/bpf: sync kernel uapi header if_link.h to tools
  2018-09-05 23:58 [PATCH bpf-next 0/4] tools/bpf: add bpftool net support Yonghong Song
@ 2018-09-05 23:58 ` Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 2/4] tools/bpf: move bpf/lib netlink related functions into a new file Yonghong Song
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2018-09-05 23:58 UTC (permalink / raw)
  To: ast, daniel, netdev; +Cc: kernel-team

Among other things, this header sync brings in definitions that
will be used later for bpftool net support.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/include/uapi/linux/if_link.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h
index cf01b6824244..43391e2d1153 100644
--- a/tools/include/uapi/linux/if_link.h
+++ b/tools/include/uapi/linux/if_link.h
@@ -164,6 +164,8 @@ enum {
 	IFLA_CARRIER_UP_COUNT,
 	IFLA_CARRIER_DOWN_COUNT,
 	IFLA_NEW_IFINDEX,
+	IFLA_MIN_MTU,
+	IFLA_MAX_MTU,
 	__IFLA_MAX
 };
 
@@ -334,6 +336,7 @@ enum {
 	IFLA_BRPORT_GROUP_FWD_MASK,
 	IFLA_BRPORT_NEIGH_SUPPRESS,
 	IFLA_BRPORT_ISOLATED,
+	IFLA_BRPORT_BACKUP_PORT,
 	__IFLA_BRPORT_MAX
 };
 #define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1)
@@ -459,6 +462,16 @@ enum {
 
 #define IFLA_MACSEC_MAX (__IFLA_MACSEC_MAX - 1)
 
+/* XFRM section */
+enum {
+	IFLA_XFRM_UNSPEC,
+	IFLA_XFRM_LINK,
+	IFLA_XFRM_IF_ID,
+	__IFLA_XFRM_MAX
+};
+
+#define IFLA_XFRM_MAX (__IFLA_XFRM_MAX - 1)
+
 enum macsec_validation_type {
 	MACSEC_VALIDATE_DISABLED = 0,
 	MACSEC_VALIDATE_CHECK = 1,
@@ -920,6 +933,7 @@ enum {
 	XDP_ATTACHED_DRV,
 	XDP_ATTACHED_SKB,
 	XDP_ATTACHED_HW,
+	XDP_ATTACHED_MULTI,
 };
 
 enum {
@@ -928,6 +942,9 @@ enum {
 	IFLA_XDP_ATTACHED,
 	IFLA_XDP_FLAGS,
 	IFLA_XDP_PROG_ID,
+	IFLA_XDP_DRV_PROG_ID,
+	IFLA_XDP_SKB_PROG_ID,
+	IFLA_XDP_HW_PROG_ID,
 	__IFLA_XDP_MAX,
 };
 
-- 
2.17.1


* [PATCH bpf-next 2/4] tools/bpf: move bpf/lib netlink related functions into a new file
  2018-09-05 23:58 [PATCH bpf-next 0/4] tools/bpf: add bpftool net support Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 1/4] tools/bpf: sync kernel uapi header if_link.h to tools Yonghong Song
@ 2018-09-05 23:58 ` Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 3/4] tools/bpf: add more netlink functionalities in lib/bpf Yonghong Song
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2018-09-05 23:58 UTC (permalink / raw)
  To: ast, daniel, netdev; +Cc: kernel-team

There is no functionality change in this patch.

In subsequent patches, more netlink related library functions
will be added, and a separate file is better than cluttering bpf.c.
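
For context, here is a minimal sketch of how an application might call
the function being moved. The caller is hypothetical and assumes a
valid bpf program fd obtained elsewhere (e.g. from bpf_prog_load())
and the bpf_set_link_xdp_fd() declaration from tools/lib/bpf/bpf.h:

  #include <linux/if_link.h>	/* XDP_FLAGS_SKB_MODE */
  #include "bpf.h"		/* bpf_set_link_xdp_fd() */

  /* Attach prog_fd as the XDP program of the given interface in
   * generic (skb) mode; returns 0 or a negative error code.
   */
  static int attach_xdp_generic(int ifindex, int prog_fd)
  {
  	return bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_SKB_MODE);
  }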

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/lib/bpf/Build     |   2 +-
 tools/lib/bpf/bpf.c     | 129 -------------------------------
 tools/lib/bpf/netlink.c | 165 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 166 insertions(+), 130 deletions(-)
 create mode 100644 tools/lib/bpf/netlink.c

diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build
index 13a861135127..512b2c0ba0d2 100644
--- a/tools/lib/bpf/Build
+++ b/tools/lib/bpf/Build
@@ -1 +1 @@
-libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o
+libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o netlink.o
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 60aa4ca8b2c5..3878a26a2071 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -28,16 +28,8 @@
 #include <linux/bpf.h>
 #include "bpf.h"
 #include "libbpf.h"
-#include "nlattr.h"
-#include <linux/rtnetlink.h>
-#include <linux/if_link.h>
-#include <sys/socket.h>
 #include <errno.h>
 
-#ifndef SOL_NETLINK
-#define SOL_NETLINK 270
-#endif
-
 /*
  * When building perf, unistd.h is overridden. __NR_bpf is
  * required to be defined explicitly.
@@ -499,127 +491,6 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd)
 	return sys_bpf(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
 }
 
-int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
-{
-	struct sockaddr_nl sa;
-	int sock, seq = 0, len, ret = -1;
-	char buf[4096];
-	struct nlattr *nla, *nla_xdp;
-	struct {
-		struct nlmsghdr  nh;
-		struct ifinfomsg ifinfo;
-		char             attrbuf[64];
-	} req;
-	struct nlmsghdr *nh;
-	struct nlmsgerr *err;
-	socklen_t addrlen;
-	int one = 1;
-
-	memset(&sa, 0, sizeof(sa));
-	sa.nl_family = AF_NETLINK;
-
-	sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
-	if (sock < 0) {
-		return -errno;
-	}
-
-	if (setsockopt(sock, SOL_NETLINK, NETLINK_EXT_ACK,
-		       &one, sizeof(one)) < 0) {
-		fprintf(stderr, "Netlink error reporting not supported\n");
-	}
-
-	if (bind(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
-		ret = -errno;
-		goto cleanup;
-	}
-
-	addrlen = sizeof(sa);
-	if (getsockname(sock, (struct sockaddr *)&sa, &addrlen) < 0) {
-		ret = -errno;
-		goto cleanup;
-	}
-
-	if (addrlen != sizeof(sa)) {
-		ret = -LIBBPF_ERRNO__INTERNAL;
-		goto cleanup;
-	}
-
-	memset(&req, 0, sizeof(req));
-	req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
-	req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
-	req.nh.nlmsg_type = RTM_SETLINK;
-	req.nh.nlmsg_pid = 0;
-	req.nh.nlmsg_seq = ++seq;
-	req.ifinfo.ifi_family = AF_UNSPEC;
-	req.ifinfo.ifi_index = ifindex;
-
-	/* started nested attribute for XDP */
-	nla = (struct nlattr *)(((char *)&req)
-				+ NLMSG_ALIGN(req.nh.nlmsg_len));
-	nla->nla_type = NLA_F_NESTED | IFLA_XDP;
-	nla->nla_len = NLA_HDRLEN;
-
-	/* add XDP fd */
-	nla_xdp = (struct nlattr *)((char *)nla + nla->nla_len);
-	nla_xdp->nla_type = IFLA_XDP_FD;
-	nla_xdp->nla_len = NLA_HDRLEN + sizeof(int);
-	memcpy((char *)nla_xdp + NLA_HDRLEN, &fd, sizeof(fd));
-	nla->nla_len += nla_xdp->nla_len;
-
-	/* if user passed in any flags, add those too */
-	if (flags) {
-		nla_xdp = (struct nlattr *)((char *)nla + nla->nla_len);
-		nla_xdp->nla_type = IFLA_XDP_FLAGS;
-		nla_xdp->nla_len = NLA_HDRLEN + sizeof(flags);
-		memcpy((char *)nla_xdp + NLA_HDRLEN, &flags, sizeof(flags));
-		nla->nla_len += nla_xdp->nla_len;
-	}
-
-	req.nh.nlmsg_len += NLA_ALIGN(nla->nla_len);
-
-	if (send(sock, &req, req.nh.nlmsg_len, 0) < 0) {
-		ret = -errno;
-		goto cleanup;
-	}
-
-	len = recv(sock, buf, sizeof(buf), 0);
-	if (len < 0) {
-		ret = -errno;
-		goto cleanup;
-	}
-
-	for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
-	     nh = NLMSG_NEXT(nh, len)) {
-		if (nh->nlmsg_pid != sa.nl_pid) {
-			ret = -LIBBPF_ERRNO__WRNGPID;
-			goto cleanup;
-		}
-		if (nh->nlmsg_seq != seq) {
-			ret = -LIBBPF_ERRNO__INVSEQ;
-			goto cleanup;
-		}
-		switch (nh->nlmsg_type) {
-		case NLMSG_ERROR:
-			err = (struct nlmsgerr *)NLMSG_DATA(nh);
-			if (!err->error)
-				continue;
-			ret = err->error;
-			nla_dump_errormsg(nh);
-			goto cleanup;
-		case NLMSG_DONE:
-			break;
-		default:
-			break;
-		}
-	}
-
-	ret = 0;
-
-cleanup:
-	close(sock);
-	return ret;
-}
-
 int bpf_load_btf(void *btf, __u32 btf_size, char *log_buf, __u32 log_buf_size,
 		 bool do_log)
 {
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
new file mode 100644
index 000000000000..ccaa991fe9d8
--- /dev/null
+++ b/tools/lib/bpf/netlink.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: LGPL-2.1
+/* Copyright (c) 2018 Facebook */
+
+#include <stdlib.h>
+#include <memory.h>
+#include <unistd.h>
+#include <linux/bpf.h>
+#include <linux/rtnetlink.h>
+#include <sys/socket.h>
+#include <errno.h>
+#include <time.h>
+
+#include "bpf.h"
+#include "libbpf.h"
+#include "nlattr.h"
+
+#ifndef SOL_NETLINK
+#define SOL_NETLINK 270
+#endif
+
+static int bpf_netlink_open(__u32 *nl_pid)
+{
+	struct sockaddr_nl sa;
+	socklen_t addrlen;
+	int one = 1, ret;
+	int sock;
+
+	memset(&sa, 0, sizeof(sa));
+	sa.nl_family = AF_NETLINK;
+
+	sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
+	if (sock < 0)
+		return -errno;
+
+	if (setsockopt(sock, SOL_NETLINK, NETLINK_EXT_ACK,
+		       &one, sizeof(one)) < 0) {
+		fprintf(stderr, "Netlink error reporting not supported\n");
+	}
+
+	if (bind(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
+		ret = -errno;
+		goto cleanup;
+	}
+
+	addrlen = sizeof(sa);
+	if (getsockname(sock, (struct sockaddr *)&sa, &addrlen) < 0) {
+		ret = -errno;
+		goto cleanup;
+	}
+
+	if (addrlen != sizeof(sa)) {
+		ret = -LIBBPF_ERRNO__INTERNAL;
+		goto cleanup;
+	}
+
+	*nl_pid = sa.nl_pid;
+	return sock;
+
+cleanup:
+	close(sock);
+	return ret;
+}
+
+static int bpf_netlink_recv(int sock, __u32 nl_pid, int seq)
+{
+	struct nlmsgerr *err;
+	struct nlmsghdr *nh;
+	char buf[4096];
+	int len, ret;
+
+	while (1) {
+		len = recv(sock, buf, sizeof(buf), 0);
+		if (len < 0) {
+			ret = -errno;
+			goto done;
+		}
+
+		for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
+		     nh = NLMSG_NEXT(nh, len)) {
+			if (nh->nlmsg_pid != nl_pid) {
+				ret = -LIBBPF_ERRNO__WRNGPID;
+				goto done;
+			}
+			if (nh->nlmsg_seq != seq) {
+				ret = -LIBBPF_ERRNO__INVSEQ;
+				goto done;
+			}
+			switch (nh->nlmsg_type) {
+			case NLMSG_ERROR:
+				err = (struct nlmsgerr *)NLMSG_DATA(nh);
+				if (!err->error)
+					continue;
+				ret = err->error;
+				nla_dump_errormsg(nh);
+				goto done;
+			case NLMSG_DONE:
+				return 0;
+			default:
+				break;
+			}
+		}
+	}
+	ret = 0;
+done:
+	return ret;
+}
+
+int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
+{
+	int sock, seq = 0, ret;
+	struct nlattr *nla, *nla_xdp;
+	struct {
+		struct nlmsghdr  nh;
+		struct ifinfomsg ifinfo;
+		char             attrbuf[64];
+	} req;
+	__u32 nl_pid;
+
+	sock = bpf_netlink_open(&nl_pid);
+	if (sock < 0)
+		return sock;
+
+	memset(&req, 0, sizeof(req));
+	req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
+	req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
+	req.nh.nlmsg_type = RTM_SETLINK;
+	req.nh.nlmsg_pid = 0;
+	req.nh.nlmsg_seq = ++seq;
+	req.ifinfo.ifi_family = AF_UNSPEC;
+	req.ifinfo.ifi_index = ifindex;
+
+	/* started nested attribute for XDP */
+	nla = (struct nlattr *)(((char *)&req)
+				+ NLMSG_ALIGN(req.nh.nlmsg_len));
+	nla->nla_type = NLA_F_NESTED | IFLA_XDP;
+	nla->nla_len = NLA_HDRLEN;
+
+	/* add XDP fd */
+	nla_xdp = (struct nlattr *)((char *)nla + nla->nla_len);
+	nla_xdp->nla_type = IFLA_XDP_FD;
+	nla_xdp->nla_len = NLA_HDRLEN + sizeof(int);
+	memcpy((char *)nla_xdp + NLA_HDRLEN, &fd, sizeof(fd));
+	nla->nla_len += nla_xdp->nla_len;
+
+	/* if user passed in any flags, add those too */
+	if (flags) {
+		nla_xdp = (struct nlattr *)((char *)nla + nla->nla_len);
+		nla_xdp->nla_type = IFLA_XDP_FLAGS;
+		nla_xdp->nla_len = NLA_HDRLEN + sizeof(flags);
+		memcpy((char *)nla_xdp + NLA_HDRLEN, &flags, sizeof(flags));
+		nla->nla_len += nla_xdp->nla_len;
+	}
+
+	req.nh.nlmsg_len += NLA_ALIGN(nla->nla_len);
+
+	if (send(sock, &req, req.nh.nlmsg_len, 0) < 0) {
+		ret = -errno;
+		goto cleanup;
+	}
+	ret = bpf_netlink_recv(sock, nl_pid, seq);
+
+cleanup:
+	close(sock);
+	return ret;
+}
-- 
2.17.1


* [PATCH bpf-next 3/4] tools/bpf: add more netlink functionalities in lib/bpf
  2018-09-05 23:58 [PATCH bpf-next 0/4] tools/bpf: add bpftool net support Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 1/4] tools/bpf: sync kernel uapi header if_link.h to tools Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 2/4] tools/bpf: move bpf/lib netlink related functions into a new file Yonghong Song
@ 2018-09-05 23:58 ` Yonghong Song
  2018-09-05 23:58 ` [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support Yonghong Song
  2018-09-06 18:11 ` [PATCH bpf-next 0/4] tools/bpf: add bpftool " Alexei Starovoitov
  4 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2018-09-05 23:58 UTC (permalink / raw)
  To: ast, daniel, netdev; +Cc: kernel-team

This patch adds a few netlink attribute parsing functions
and netlink API functions to query networking links, tc classes,
tc qdiscs and tc filters. For example, the following API
gets networking links:
  int nl_get_link(int sock, unsigned int nl_pid,
                  dump_nlmsg_t dump_link_nlmsg,
                  void *cookie);

Note that when the API is called, the user also provides a
callback function with the following signature:
  int (*dump_nlmsg_t)(void *cookie, void *msg, struct nlattr **tb);

The "cookie" is the parameter the user passed to the API and will
be available for the callback function.
The "msg" is the information about the result, e.g., ifinfomsg or
tcmsg. The "tb" is the parsed netlink attributes.
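
For illustration, a minimal (hypothetical) caller could look like the
sketch below; the callback name and the int counter used as cookie are
made up for this example:

  #include <stdio.h>
  #include <unistd.h>
  #include <linux/rtnetlink.h>
  #include "libbpf.h"
  #include "nlattr.h"

  /* For RTM_GETLINK dumps, "msg" points to a struct ifinfomsg. */
  static int print_link(void *cookie, void *msg, struct nlattr **tb)
  {
  	struct ifinfomsg *ifi = msg;
  	int *count = cookie;	/* caller-owned state passed through */

  	printf("ifindex %d name %s\n", ifi->ifi_index,
  	       tb[IFLA_IFNAME] ? nla_getattr_str(tb[IFLA_IFNAME]) : "?");
  	(*count)++;
  	return 0;		/* a nonzero return aborts the dump */
  }

  int main(void)
  {
  	unsigned int nl_pid;
  	int count = 0, sock;

  	sock = bpf_netlink_open(&nl_pid);
  	if (sock < 0)
  		return 1;
  	if (nl_get_link(sock, nl_pid, print_link, &count) == 0)
  		printf("%d links dumped\n", count);
  	close(sock);
  	return 0;
  }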

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 tools/lib/bpf/libbpf.h       |  16 ++++
 tools/lib/bpf/libbpf_errno.c |   1 +
 tools/lib/bpf/netlink.c      | 165 ++++++++++++++++++++++++++++++++++-
 tools/lib/bpf/nlattr.c       |  33 ++++---
 tools/lib/bpf/nlattr.h       |  38 ++++++++
 5 files changed, 238 insertions(+), 15 deletions(-)

diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 96c55fac54c3..e3b00e23e181 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -46,6 +46,7 @@ enum libbpf_errno {
 	LIBBPF_ERRNO__PROGTYPE,	/* Kernel doesn't support this program type */
 	LIBBPF_ERRNO__WRNGPID,	/* Wrong pid in netlink message */
 	LIBBPF_ERRNO__INVSEQ,	/* Invalid netlink sequence */
+	LIBBPF_ERRNO__NLPARSE,	/* netlink parsing error */
 	__LIBBPF_ERRNO__END,
 };
 
@@ -297,4 +298,19 @@ int bpf_perf_event_read_simple(void *mem, unsigned long size,
 			       unsigned long page_size,
 			       void **buf, size_t *buf_len,
 			       bpf_perf_event_print_t fn, void *priv);
+
+struct nlmsghdr;
+struct nlattr;
+typedef int (*dump_nlmsg_t)(void *cookie, void *msg, struct nlattr **tb);
+typedef int (*__dump_nlmsg_t)(struct nlmsghdr *nlmsg, dump_nlmsg_t,
+			      void *cookie);
+int bpf_netlink_open(unsigned int *nl_pid);
+int nl_get_link(int sock, unsigned int nl_pid, dump_nlmsg_t dump_link_nlmsg,
+		void *cookie);
+int nl_get_class(int sock, unsigned int nl_pid, int ifindex,
+		 dump_nlmsg_t dump_class_nlmsg, void *cookie);
+int nl_get_qdisc(int sock, unsigned int nl_pid, int ifindex,
+		 dump_nlmsg_t dump_qdisc_nlmsg, void *cookie);
+int nl_get_filter(int sock, unsigned int nl_pid, int ifindex, int handle,
+		  dump_nlmsg_t dump_filter_nlmsg, void *cookie);
 #endif
diff --git a/tools/lib/bpf/libbpf_errno.c b/tools/lib/bpf/libbpf_errno.c
index d9ba851bd7f9..2464ade3b326 100644
--- a/tools/lib/bpf/libbpf_errno.c
+++ b/tools/lib/bpf/libbpf_errno.c
@@ -42,6 +42,7 @@ static const char *libbpf_strerror_table[NR_ERRNO] = {
 	[ERRCODE_OFFSET(PROGTYPE)]	= "Kernel doesn't support this program type",
 	[ERRCODE_OFFSET(WRNGPID)]	= "Wrong pid in netlink message",
 	[ERRCODE_OFFSET(INVSEQ)]	= "Invalid netlink sequence",
+	[ERRCODE_OFFSET(NLPARSE)]	= "Incorrect netlink message parsing",
 };
 
 int libbpf_strerror(int err, char *buf, size_t size)
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index ccaa991fe9d8..469e068dd0c5 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -18,7 +18,7 @@
 #define SOL_NETLINK 270
 #endif
 
-static int bpf_netlink_open(__u32 *nl_pid)
+int bpf_netlink_open(__u32 *nl_pid)
 {
 	struct sockaddr_nl sa;
 	socklen_t addrlen;
@@ -61,7 +61,9 @@ static int bpf_netlink_open(__u32 *nl_pid)
 	return ret;
 }
 
-static int bpf_netlink_recv(int sock, __u32 nl_pid, int seq)
+static int bpf_netlink_recv(int sock, __u32 nl_pid, int seq,
+			    __dump_nlmsg_t _fn, dump_nlmsg_t fn,
+			    void *cookie)
 {
 	struct nlmsgerr *err;
 	struct nlmsghdr *nh;
@@ -98,6 +100,11 @@ static int bpf_netlink_recv(int sock, __u32 nl_pid, int seq)
 			default:
 				break;
 			}
+			if (_fn) {
+				ret = _fn(nh, fn, cookie);
+				if (ret)
+					return ret;
+			}
 		}
 	}
 	ret = 0;
@@ -157,9 +164,161 @@ int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
 		ret = -errno;
 		goto cleanup;
 	}
-	ret = bpf_netlink_recv(sock, nl_pid, seq);
+	ret = bpf_netlink_recv(sock, nl_pid, seq, NULL, NULL, NULL);
 
 cleanup:
 	close(sock);
 	return ret;
 }
+
+static int __dump_link_nlmsg(struct nlmsghdr *nlh, dump_nlmsg_t dump_link_nlmsg,
+			     void *cookie)
+{
+	struct nlattr *tb[IFLA_MAX + 1], *attr;
+	struct ifinfomsg *ifi = NLMSG_DATA(nlh);
+	int len;
+
+	len = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifi));
+	attr = (struct nlattr *) ((void *) ifi + NLMSG_ALIGN(sizeof(*ifi)));
+	if (nla_parse(tb, IFLA_MAX, attr, len, NULL) != 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	return dump_link_nlmsg(cookie, ifi, tb);
+}
+
+int nl_get_link(int sock, unsigned int nl_pid, dump_nlmsg_t dump_link_nlmsg,
+		void *cookie)
+{
+	struct {
+		struct nlmsghdr nlh;
+		struct ifinfomsg ifm;
+	} req = {
+		.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
+		.nlh.nlmsg_type = RTM_GETLINK,
+		.nlh.nlmsg_flags = NLM_F_DUMP | NLM_F_REQUEST,
+		.ifm.ifi_family = AF_PACKET,
+	};
+	int seq = time(NULL);
+
+	req.nlh.nlmsg_seq = seq;
+	if (send(sock, &req, req.nlh.nlmsg_len, 0) < 0)
+		return -errno;
+
+	return bpf_netlink_recv(sock, nl_pid, seq, __dump_link_nlmsg,
+				dump_link_nlmsg, cookie);
+}
+
+static int __dump_class_nlmsg(struct nlmsghdr *nlh,
+			      dump_nlmsg_t dump_class_nlmsg, void *cookie)
+{
+	struct nlattr *tb[TCA_MAX + 1], *attr;
+	struct tcmsg *t = NLMSG_DATA(nlh);
+	int len;
+
+	len = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*t));
+	attr = (struct nlattr *) ((void *) t + NLMSG_ALIGN(sizeof(*t)));
+	if (nla_parse(tb, TCA_MAX, attr, len, NULL) != 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	return dump_class_nlmsg(cookie, t, tb);
+}
+
+int nl_get_class(int sock, unsigned int nl_pid, int ifindex,
+		 dump_nlmsg_t dump_class_nlmsg, void *cookie)
+{
+	struct {
+		struct nlmsghdr nlh;
+		struct tcmsg t;
+	} req = {
+		.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct tcmsg)),
+		.nlh.nlmsg_type = RTM_GETTCLASS,
+		.nlh.nlmsg_flags = NLM_F_DUMP | NLM_F_REQUEST,
+		.t.tcm_family = AF_UNSPEC,
+		.t.tcm_ifindex = ifindex,
+	};
+	int seq = time(NULL);
+
+	req.nlh.nlmsg_seq = seq;
+	if (send(sock, &req, req.nlh.nlmsg_len, 0) < 0)
+		return -errno;
+
+	return bpf_netlink_recv(sock, nl_pid, seq, __dump_class_nlmsg,
+				dump_class_nlmsg, cookie);
+}
+
+static int __dump_qdisc_nlmsg(struct nlmsghdr *nlh,
+			      dump_nlmsg_t dump_qdisc_nlmsg, void *cookie)
+{
+	struct nlattr *tb[TCA_MAX + 1], *attr;
+	struct tcmsg *t = NLMSG_DATA(nlh);
+	int len;
+
+	len = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*t));
+	attr = (struct nlattr *) ((void *) t + NLMSG_ALIGN(sizeof(*t)));
+	if (nla_parse(tb, TCA_MAX, attr, len, NULL) != 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	return dump_qdisc_nlmsg(cookie, t, tb);
+}
+
+int nl_get_qdisc(int sock, unsigned int nl_pid, int ifindex,
+		 dump_nlmsg_t dump_qdisc_nlmsg, void *cookie)
+{
+	struct {
+		struct nlmsghdr nlh;
+		struct tcmsg t;
+	} req = {
+		.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct tcmsg)),
+		.nlh.nlmsg_type = RTM_GETQDISC,
+		.nlh.nlmsg_flags = NLM_F_DUMP | NLM_F_REQUEST,
+		.t.tcm_family = AF_UNSPEC,
+		.t.tcm_ifindex = ifindex,
+	};
+	int seq = time(NULL);
+
+	req.nlh.nlmsg_seq = seq;
+	if (send(sock, &req, req.nlh.nlmsg_len, 0) < 0)
+		return -errno;
+
+	return bpf_netlink_recv(sock, nl_pid, seq, __dump_qdisc_nlmsg,
+				dump_qdisc_nlmsg, cookie);
+}
+
+static int __dump_filter_nlmsg(struct nlmsghdr *nlh,
+			       dump_nlmsg_t dump_filter_nlmsg, void *cookie)
+{
+	struct nlattr *tb[TCA_MAX + 1], *attr;
+	struct tcmsg *t = NLMSG_DATA(nlh);
+	int len;
+
+	len = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*t));
+	attr = (struct nlattr *) ((void *) t + NLMSG_ALIGN(sizeof(*t)));
+	if (nla_parse(tb, TCA_MAX, attr, len, NULL) != 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	return dump_filter_nlmsg(cookie, t, tb);
+}
+
+int nl_get_filter(int sock, unsigned int nl_pid, int ifindex, int handle,
+		  dump_nlmsg_t dump_filter_nlmsg, void *cookie)
+{
+	struct {
+		struct nlmsghdr nlh;
+		struct tcmsg t;
+	} req = {
+		.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct tcmsg)),
+		.nlh.nlmsg_type = RTM_GETTFILTER,
+		.nlh.nlmsg_flags = NLM_F_DUMP | NLM_F_REQUEST,
+		.t.tcm_family = AF_UNSPEC,
+		.t.tcm_ifindex = ifindex,
+		.t.tcm_parent = handle,
+	};
+	int seq = time(NULL);
+
+	req.nlh.nlmsg_seq = seq;
+	if (send(sock, &req, req.nlh.nlmsg_len, 0) < 0)
+		return -errno;
+
+	return bpf_netlink_recv(sock, nl_pid, seq, __dump_filter_nlmsg,
+				dump_filter_nlmsg, cookie);
+}
diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
index 4719434278b2..49f514119bdb 100644
--- a/tools/lib/bpf/nlattr.c
+++ b/tools/lib/bpf/nlattr.c
@@ -26,11 +26,6 @@ static uint16_t nla_attr_minlen[NLA_TYPE_MAX+1] = {
 	[NLA_FLAG]	= 0,
 };
 
-static int nla_len(const struct nlattr *nla)
-{
-	return nla->nla_len - NLA_HDRLEN;
-}
-
 static struct nlattr *nla_next(const struct nlattr *nla, int *remaining)
 {
 	int totlen = NLA_ALIGN(nla->nla_len);
@@ -46,11 +41,6 @@ static int nla_ok(const struct nlattr *nla, int remaining)
 	       nla->nla_len <= remaining;
 }
 
-static void *nla_data(const struct nlattr *nla)
-{
-	return (char *) nla + NLA_HDRLEN;
-}
-
 static int nla_type(const struct nlattr *nla)
 {
 	return nla->nla_type & NLA_TYPE_MASK;
@@ -114,8 +104,8 @@ static inline int nlmsg_len(const struct nlmsghdr *nlh)
  * @see nla_validate
  * @return 0 on success or a negative error code.
  */
-static int nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head, int len,
-		     struct nla_policy *policy)
+int nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head, int len,
+	      struct nla_policy *policy)
 {
 	struct nlattr *nla;
 	int rem, err;
@@ -146,6 +136,25 @@ static int nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head, int
 	return err;
 }
 
+/**
+ * Create attribute index based on nested attribute
+ * @arg tb              Index array to be filled (maxtype+1 elements).
+ * @arg maxtype         Maximum attribute type expected and accepted.
+ * @arg nla             Nested Attribute.
+ * @arg policy          Attribute validation policy.
+ *
+ * Feeds the stream of attributes nested into the specified attribute
+ * to nla_parse().
+ *
+ * @see nla_parse
+ * @return 0 on success or a negative error code.
+ */
+int nla_parse_nested(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+		     struct nla_policy *policy)
+{
+	return nla_parse(tb, maxtype, nla_data(nla), nla_len(nla), policy);
+}
+
 /* dump netlink extended ack error message */
 int nla_dump_errormsg(struct nlmsghdr *nlh)
 {
diff --git a/tools/lib/bpf/nlattr.h b/tools/lib/bpf/nlattr.h
index 931a71f68f93..a6e2396bce7c 100644
--- a/tools/lib/bpf/nlattr.h
+++ b/tools/lib/bpf/nlattr.h
@@ -67,6 +67,44 @@ struct nla_policy {
 	     nla_ok(pos, rem); \
 	     pos = nla_next(pos, &(rem)))
 
+/**
+ * nla_data - head of payload
+ * @nla: netlink attribute
+ */
+static inline void *nla_data(const struct nlattr *nla)
+{
+	return (char *) nla + NLA_HDRLEN;
+}
+
+static inline uint8_t nla_getattr_u8(const struct nlattr *nla)
+{
+	return *(uint8_t *)nla_data(nla);
+}
+
+static inline uint32_t nla_getattr_u32(const struct nlattr *nla)
+{
+	return *(uint32_t *)nla_data(nla);
+}
+
+static inline const char *nla_getattr_str(const struct nlattr *nla)
+{
+	return (const char *)nla_data(nla);
+}
+
+/**
+ * nla_len - length of payload
+ * @nla: netlink attribute
+ */
+static inline int nla_len(const struct nlattr *nla)
+{
+	return nla->nla_len - NLA_HDRLEN;
+}
+
+int nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head, int len,
+	      struct nla_policy *policy);
+int nla_parse_nested(struct nlattr *tb[], int maxtype, struct nlattr *nla,
+		     struct nla_policy *policy);
+
 int nla_dump_errormsg(struct nlmsghdr *nlh);
 
 #endif /* __NLATTR_H */
-- 
2.17.1


* [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support
  2018-09-05 23:58 [PATCH bpf-next 0/4] tools/bpf: add bpftool net support Yonghong Song
                   ` (2 preceding siblings ...)
  2018-09-05 23:58 ` [PATCH bpf-next 3/4] tools/bpf: add more netlink functionalities in lib/bpf Yonghong Song
@ 2018-09-05 23:58 ` Yonghong Song
  2018-09-12 22:29   ` Daniel Borkmann
  2018-09-06 18:11 ` [PATCH bpf-next 0/4] tools/bpf: add bpftool " Alexei Starovoitov
  4 siblings, 1 reply; 10+ messages in thread
From: Yonghong Song @ 2018-09-05 23:58 UTC (permalink / raw)
  To: ast, daniel, netdev; +Cc: kernel-team

Add "bpftool net" support. Networking devices are enumerated
to dump device index/name associated with xdp progs.

For each networking device, tc classes and qdiscs are enumerated
in order to check their bpf filters.
In addition, root handle and clsact ingress/egress are also checked for
bpf filters.  Not all filter information is printed out. Only ifindex,
kind, filter name, prog_id and tag are printed out, which are good
enough to show attachment information. If the filter action
is a bpf action, its bpf program id, bpf name and tag will be
printed out as well.

For example,
  $ ./bpftool net
  xdp [
  ifindex 2 devname eth0 prog_id 198
  ]
  tc_filters [
  ifindex 2 kind qdisc_htb name prefix_matcher.o:[cls_prefix_matcher_htb]
            prog_id 111727 tag d08fe3b4319bc2fd act []
  ifindex 2 kind qdisc_clsact_ingress name fbflow_icmp
            prog_id 130246 tag 3f265c7f26db62c9 act []
  ifindex 2 kind qdisc_clsact_egress name prefix_matcher.o:[cls_prefix_matcher_clsact]
            prog_id 111726 tag 99a197826974c876
  ifindex 2 kind qdisc_clsact_egress name cls_fg_dscp
            prog_id 108619 tag dc4630674fd72dcc act []
  ifindex 2 kind qdisc_clsact_egress name fbflow_egress
            prog_id 130245 tag 72d2d830d6888d2c
  ]
  $ ./bpftool -jp net
  [{
        "xdp": [{
                "ifindex": 2,
                "devname": "eth0",
                "prog_id": 198
            }
        ],
        "tc_filters": [{
                "ifindex": 2,
                "kind": "qdisc_htb",
                "name": "prefix_matcher.o:[cls_prefix_matcher_htb]",
                "prog_id": 111727,
                "tag": "d08fe3b4319bc2fd",
                "act": []
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_ingress",
                "name": "fbflow_icmp",
                "prog_id": 130246,
                "tag": "3f265c7f26db62c9",
                "act": []
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_egress",
                "name": "prefix_matcher.o:[cls_prefix_matcher_clsact]",
                "prog_id": 111726,
                "tag": "99a197826974c876"
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_egress",
                "name": "cls_fg_dscp",
                "prog_id": 108619,
                "tag": "dc4630674fd72dcc",
                "act": []
            },{
                "ifindex": 2,
                "kind": "qdisc_clsact_egress",
                "name": "fbflow_egress",
                "prog_id": 130245,
                "tag": "72d2d830d6888d2c"
            }
        ]
    }
  ]

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 .../bpf/bpftool/Documentation/bpftool-net.rst | 133 ++++++++++
 tools/bpf/bpftool/Documentation/bpftool.rst   |   6 +-
 tools/bpf/bpftool/bash-completion/bpftool     |  17 +-
 tools/bpf/bpftool/main.c                      |   3 +-
 tools/bpf/bpftool/main.h                      |   7 +
 tools/bpf/bpftool/net.c                       | 233 ++++++++++++++++++
 tools/bpf/bpftool/netlink_dumper.c            | 181 ++++++++++++++
 tools/bpf/bpftool/netlink_dumper.h            | 103 ++++++++
 8 files changed, 676 insertions(+), 7 deletions(-)
 create mode 100644 tools/bpf/bpftool/Documentation/bpftool-net.rst
 create mode 100644 tools/bpf/bpftool/net.c
 create mode 100644 tools/bpf/bpftool/netlink_dumper.c
 create mode 100644 tools/bpf/bpftool/netlink_dumper.h

diff --git a/tools/bpf/bpftool/Documentation/bpftool-net.rst b/tools/bpf/bpftool/Documentation/bpftool-net.rst
new file mode 100644
index 000000000000..48a61837a264
--- /dev/null
+++ b/tools/bpf/bpftool/Documentation/bpftool-net.rst
@@ -0,0 +1,133 @@
+================
+bpftool-net
+================
+-------------------------------------------------------------------------------
+tool for inspection of netdev/tc related bpf prog attachments
+-------------------------------------------------------------------------------
+
+:Manual section: 8
+
+SYNOPSIS
+========
+
+	**bpftool** [*OPTIONS*] **net** *COMMAND*
+
+	*OPTIONS* := { [{ **-j** | **--json** }] [{ **-p** | **--pretty** }] }
+
+	*COMMANDS* :=
+	{ **show** | **list** } [ **dev** name ] | **help**
+
+NET COMMANDS
+============
+
+|	**bpftool** **net { show | list } [ dev name ]**
+|	**bpftool** **net help**
+
+DESCRIPTION
+===========
+	**bpftool net { show | list } [ dev name ]**
+		  List all networking device driver and tc attachments in the system.
+
+                  Output will start with all xdp program attachments, followed by
+                  all tc class/qdisc bpf program attachments. Both xdp programs and
+                  tc programs are ordered based on ifindex number. If multiple bpf
+                  programs are attached to the same networking device through **tc filter**,
+                  the order will be: first all bpf programs attached to tc classes, then
+                  all bpf programs attached to non-clsact qdiscs, and finally all
+                  bpf programs attached to root and clsact qdiscs.
+
+	**bpftool net help**
+		  Print short help message.
+
+OPTIONS
+=======
+	-h, --help
+		  Print short generic help message (similar to **bpftool help**).
+
+	-v, --version
+		  Print version number (similar to **bpftool version**).
+
+	-j, --json
+		  Generate JSON output. For commands that cannot produce JSON, this
+		  option has no effect.
+
+	-p, --pretty
+		  Generate human-readable JSON output. Implies **-j**.
+
+EXAMPLES
+========
+
+| **# bpftool net**
+
+::
+
+      xdp [
+      ifindex 2 devname eth0 prog_id 198
+      ]
+      tc_filters [
+      ifindex 2 kind qdisc_htb name prefix_matcher.o:[cls_prefix_matcher_htb]
+                prog_id 111727 tag d08fe3b4319bc2fd act []
+      ifindex 2 kind qdisc_clsact_ingress name fbflow_icmp
+                prog_id 130246 tag 3f265c7f26db62c9 act []
+      ifindex 2 kind qdisc_clsact_egress name prefix_matcher.o:[cls_prefix_matcher_clsact]
+                prog_id 111726 tag 99a197826974c876
+      ifindex 2 kind qdisc_clsact_egress name cls_fg_dscp
+                prog_id 108619 tag dc4630674fd72dcc act []
+      ifindex 2 kind qdisc_clsact_egress name fbflow_egress
+                prog_id 130245 tag 72d2d830d6888d2c
+      ]
+
+|
+| **# bpftool -jp net**
+
+::
+
+    [{
+            "xdp": [{
+                    "ifindex": 2,
+                    "devname": "eth0",
+                    "prog_id": 198
+                }
+            ],
+            "tc_filters": [{
+                    "ifindex": 2,
+                    "kind": "qdisc_htb",
+                    "name": "prefix_matcher.o:[cls_prefix_matcher_htb]",
+                    "prog_id": 111727,
+                    "tag": "d08fe3b4319bc2fd",
+                    "act": []
+                },{
+                    "ifindex": 2,
+                    "kind": "qdisc_clsact_ingress",
+                    "name": "fbflow_icmp",
+                    "prog_id": 130246,
+                    "tag": "3f265c7f26db62c9",
+                    "act": []
+                },{
+                    "ifindex": 2,
+                    "kind": "qdisc_clsact_egress",
+                    "name": "prefix_matcher.o:[cls_prefix_matcher_clsact]",
+                    "prog_id": 111726,
+                    "tag": "99a197826974c876"
+                },{
+                    "ifindex": 2,
+                    "kind": "qdisc_clsact_egress",
+                    "name": "cls_fg_dscp",
+                    "prog_id": 108619,
+                    "tag": "dc4630674fd72dcc",
+                    "act": []
+                },{
+                    "ifindex": 2,
+                    "kind": "qdisc_clsact_egress",
+                    "name": "fbflow_egress",
+                    "prog_id": 130245,
+                    "tag": "72d2d830d6888d2c"
+                }
+            ]
+        }
+    ]
+
+
+SEE ALSO
+========
+	**bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8)
diff --git a/tools/bpf/bpftool/Documentation/bpftool.rst b/tools/bpf/bpftool/Documentation/bpftool.rst
index b6f5d560460d..8dda77daeda9 100644
--- a/tools/bpf/bpftool/Documentation/bpftool.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool.rst
@@ -16,7 +16,7 @@ SYNOPSIS
 
 	**bpftool** **version**
 
-	*OBJECT* := { **map** | **program** | **cgroup** | **perf** }
+	*OBJECT* := { **map** | **program** | **cgroup** | **perf** | **net** }
 
 	*OPTIONS* := { { **-V** | **--version** } | { **-h** | **--help** }
 	| { **-j** | **--json** } [{ **-p** | **--pretty** }] }
@@ -32,6 +32,8 @@ SYNOPSIS
 
 	*PERF-COMMANDS* := { **show** | **list** | **help** }
 
+	*NET-COMMANDS* := { **show** | **list** | **help** }
+
 DESCRIPTION
 ===========
 	*bpftool* allows for inspection and simple modification of BPF objects
@@ -58,4 +60,4 @@ OPTIONS
 SEE ALSO
 ========
 	**bpftool-map**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8)
-        **bpftool-perf**\ (8)
+        **bpftool-perf**\ (8), **bpftool-net**\ (8)
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 598066c40191..df1060b852c1 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -494,10 +494,10 @@ _bpftool()
                     _filedir
                     return 0
                     ;;
-		tree)
-		    _filedir
-		    return 0
-		    ;;
+                tree)
+                    _filedir
+                    return 0
+                    ;;
                 attach|detach)
                     local ATTACH_TYPES='ingress egress sock_create sock_ops \
                         device bind4 bind6 post_bind4 post_bind6 connect4 \
@@ -552,6 +552,15 @@ _bpftool()
                     ;;
             esac
             ;;
+        net)
+            case $command in
+                *)
+                    [[ $prev == $object ]] && \
+                        COMPREPLY=( $( compgen -W 'help \
+                            show list' -- "$cur" ) )
+                    ;;
+            esac
+            ;;
     esac
 } &&
 complete -F _bpftool bpftool
diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
index d15a62be6cf0..79dc3f193547 100644
--- a/tools/bpf/bpftool/main.c
+++ b/tools/bpf/bpftool/main.c
@@ -85,7 +85,7 @@ static int do_help(int argc, char **argv)
 		"       %s batch file FILE\n"
 		"       %s version\n"
 		"\n"
-		"       OBJECT := { prog | map | cgroup | perf }\n"
+		"       OBJECT := { prog | map | cgroup | perf | net }\n"
 		"       " HELP_SPEC_OPTIONS "\n"
 		"",
 		bin_name, bin_name, bin_name);
@@ -215,6 +215,7 @@ static const struct cmd cmds[] = {
 	{ "map",	do_map },
 	{ "cgroup",	do_cgroup },
 	{ "perf",	do_perf },
+	{ "net",	do_net },
 	{ "version",	do_version },
 	{ 0 }
 };
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index 238e734d75b3..02dfbcb92a23 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -136,6 +136,7 @@ int do_map(int argc, char **arg);
 int do_event_pipe(int argc, char **argv);
 int do_cgroup(int argc, char **arg);
 int do_perf(int argc, char **arg);
+int do_net(int argc, char **arg);
 
 int prog_parse_fd(int *argc, char ***argv);
 int map_parse_fd(int *argc, char ***argv);
@@ -165,4 +166,10 @@ struct btf_dumper {
  */
 int btf_dumper_type(const struct btf_dumper *d, __u32 type_id,
 		    const void *data);
+
+struct nlattr;
+struct ifinfomsg;
+struct tcmsg;
+int do_xdp_dump(struct ifinfomsg *ifinfo, struct nlattr **tb);
+int do_filter_dump(struct tcmsg *ifinfo, struct nlattr **tb, const char *kind);
 #endif
diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
new file mode 100644
index 000000000000..77dd73dd9ade
--- /dev/null
+++ b/tools/bpf/bpftool/net.c
@@ -0,0 +1,233 @@
+// SPDX-License-Identifier: GPL-2.0+
+// Copyright (C) 2018 Facebook
+
+#define _GNU_SOURCE
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <libbpf.h>
+#include <net/if.h>
+#include <linux/if.h>
+#include <linux/rtnetlink.h>
+#include <linux/tc_act/tc_bpf.h>
+#include <sys/socket.h>
+
+#include <bpf.h>
+#include <nlattr.h>
+#include "main.h"
+#include "netlink_dumper.h"
+
+struct bpf_netdev_t {
+	int	*ifindex_array;
+	int	used_len;
+	int	array_len;
+	int	filter_idx;
+};
+
+struct tc_kind_handle {
+	char	kind[64];
+	int	handle;
+};
+
+struct bpf_tcinfo_t {
+	struct tc_kind_handle	*handle_array;
+	int			used_len;
+	int			array_len;
+	bool			is_qdisc;
+};
+
+static int dump_link_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+{
+	struct bpf_netdev_t *netinfo = cookie;
+	struct ifinfomsg *ifinfo = msg;
+
+	if (netinfo->filter_idx > 0 && netinfo->filter_idx != ifinfo->ifi_index)
+		return 0;
+
+	if (netinfo->used_len == netinfo->array_len) {
+		netinfo->ifindex_array = realloc(netinfo->ifindex_array,
+			(netinfo->array_len + 16) * sizeof(int));
+		netinfo->array_len += 16;
+	}
+	netinfo->ifindex_array[netinfo->used_len++] = ifinfo->ifi_index;
+
+	return do_xdp_dump(ifinfo, tb);
+}
+
+static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+{
+	struct bpf_tcinfo_t *tcinfo = cookie;
+	struct tcmsg *info = msg;
+
+	if (tcinfo->is_qdisc) {
+		/* skip clsact qdisc */
+		if (tb[TCA_KIND] &&
+		    strcmp(nla_data(tb[TCA_KIND]), "clsact") == 0)
+			return 0;
+		if (info->tcm_handle == 0)
+			return 0;
+	}
+
+	if (tcinfo->used_len == tcinfo->array_len) {
+		tcinfo->handle_array = realloc(tcinfo->handle_array,
+			(tcinfo->array_len + 16) * sizeof(struct tc_kind_handle));
+		tcinfo->array_len += 16;
+	}
+	tcinfo->handle_array[tcinfo->used_len].handle = info->tcm_handle;
+	snprintf(tcinfo->handle_array[tcinfo->used_len].kind,
+		 sizeof(tcinfo->handle_array[tcinfo->used_len].kind),
+		 "%s_%s",
+		 tcinfo->is_qdisc ? "qdisc" : "class",
+		 tb[TCA_KIND] ? nla_getattr_str(tb[TCA_KIND]) : "unknown");
+	tcinfo->used_len++;
+
+	return 0;
+}
+
+static int dump_filter_nlmsg(void *cookie, void *msg, struct nlattr **tb)
+{
+	const char *kind = cookie;
+
+	return do_filter_dump((struct tcmsg *)msg, tb, kind);
+}
+
+static int show_dev_tc_bpf(int sock, unsigned int nl_pid, int ifindex)
+{
+	struct bpf_tcinfo_t tcinfo;
+	int i, handle, ret;
+
+	tcinfo.handle_array = NULL;
+	tcinfo.used_len = 0;
+	tcinfo.array_len = 0;
+
+	tcinfo.is_qdisc = false;
+	ret = nl_get_class(sock, nl_pid, ifindex, dump_class_qdisc_nlmsg,
+			   &tcinfo);
+	if (ret)
+		return ret;
+
+	tcinfo.is_qdisc = true;
+	ret = nl_get_qdisc(sock, nl_pid, ifindex, dump_class_qdisc_nlmsg,
+			   &tcinfo);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < tcinfo.used_len; i++) {
+		ret = nl_get_filter(sock, nl_pid, ifindex,
+				    tcinfo.handle_array[i].handle,
+				    dump_filter_nlmsg,
+				    tcinfo.handle_array[i].kind);
+		if (ret)
+			return ret;
+	}
+
+	/* root, ingress and egress handle */
+	handle = TC_H_ROOT;
+	ret = nl_get_filter(sock, nl_pid, ifindex, handle, dump_filter_nlmsg,
+			    "root");
+	if (ret)
+		return ret;
+
+	handle = TC_H_MAKE(TC_H_CLSACT, TC_H_MIN_INGRESS);
+	ret = nl_get_filter(sock, nl_pid, ifindex, handle, dump_filter_nlmsg,
+			    "qdisc_clsact_ingress");
+	if (ret)
+		return ret;
+
+	handle = TC_H_MAKE(TC_H_CLSACT, TC_H_MIN_EGRESS);
+	ret = nl_get_filter(sock, nl_pid, ifindex, handle, dump_filter_nlmsg,
+			    "qdisc_clsact_egress");
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int do_show(int argc, char **argv)
+{
+	int i, sock, ret, filter_idx = -1;
+	struct bpf_netdev_t dev_array;
+	unsigned int nl_pid;
+	char err_buf[256];
+
+	if (argc == 2) {
+		if (strcmp(argv[0], "dev") != 0)
+			usage();
+		filter_idx = if_nametoindex(argv[1]);
+		if (filter_idx == 0) {
+			fprintf(stderr, "invalid dev name %s\n", argv[1]);
+			return -1;
+		}
+	} else if (argc != 0) {
+		usage();
+	}
+
+	sock = bpf_netlink_open(&nl_pid);
+	if (sock < 0) {
+		fprintf(stderr, "failed to open netlink sock\n");
+		return -1;
+	}
+
+	dev_array.ifindex_array = NULL;
+	dev_array.used_len = 0;
+	dev_array.array_len = 0;
+	dev_array.filter_idx = filter_idx;
+
+	if (json_output)
+		jsonw_start_array(json_wtr);
+	NET_START_OBJECT;
+	NET_START_ARRAY("xdp", "\n");
+	ret = nl_get_link(sock, nl_pid, dump_link_nlmsg, &dev_array);
+	NET_END_ARRAY("\n");
+
+	if (!ret) {
+		NET_START_ARRAY("tc_filters", "\n");
+		for (i = 0; i < dev_array.used_len; i++) {
+			ret = show_dev_tc_bpf(sock, nl_pid,
+					      dev_array.ifindex_array[i]);
+			if (ret)
+				break;
+		}
+		NET_END_ARRAY("\n");
+	}
+	NET_END_OBJECT;
+	if (json_output)
+		jsonw_end_array(json_wtr);
+
+	if (ret) {
+		if (json_output)
+			jsonw_null(json_wtr);
+		libbpf_strerror(ret, err_buf, sizeof(err_buf));
+		fprintf(stderr, "Error: %s\n", err_buf);
+	}
+	free(dev_array.ifindex_array);
+	close(sock);
+	return ret;
+}
+
+static int do_help(int argc, char **argv)
+{
+	if (json_output) {
+		jsonw_null(json_wtr);
+		return 0;
+	}
+
+	fprintf(stderr,
+		"Usage: %s %s { show | list } [dev <devname>]\n"
+		"       %s %s help\n",
+		bin_name, argv[-2], bin_name, argv[-2]);
+
+	return 0;
+}
+
+static const struct cmd cmds[] = {
+	{ "show",	do_show },
+	{ "list",	do_show },
+	{ "help",	do_help },
+	{ 0 }
+};
+
+int do_net(int argc, char **argv)
+{
+	return cmd_select(cmds, argc, argv, do_help);
+}
diff --git a/tools/bpf/bpftool/netlink_dumper.c b/tools/bpf/bpftool/netlink_dumper.c
new file mode 100644
index 000000000000..e12494fd1d2e
--- /dev/null
+++ b/tools/bpf/bpftool/netlink_dumper.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0+
+// Copyright (C) 2018 Facebook
+
+#include <stdlib.h>
+#include <string.h>
+#include <libbpf.h>
+#include <linux/rtnetlink.h>
+#include <linux/tc_act/tc_bpf.h>
+
+#include <nlattr.h>
+#include "main.h"
+#include "netlink_dumper.h"
+
+static void xdp_dump_prog_id(struct nlattr **tb, int attr,
+			     const char *type)
+{
+	if (!tb[attr])
+		return;
+
+	NET_DUMP_UINT(type, nla_getattr_u32(tb[attr]))
+}
+
+static int do_xdp_dump_one(struct nlattr *attr, unsigned int ifindex,
+			   const char *name)
+{
+	struct nlattr *tb[IFLA_XDP_MAX + 1];
+	unsigned char mode;
+
+	if (nla_parse_nested(tb, IFLA_XDP_MAX, attr, NULL) < 0)
+		return -1;
+
+	if (!tb[IFLA_XDP_ATTACHED])
+		return 0;
+
+	mode = nla_getattr_u8(tb[IFLA_XDP_ATTACHED]);
+	if (mode == XDP_ATTACHED_NONE)
+		return 0;
+
+	NET_START_OBJECT;
+	NET_DUMP_UINT("ifindex", ifindex);
+
+	if (name)
+		NET_DUMP_STR("devname", name);
+
+	if (tb[IFLA_XDP_PROG_ID])
+		NET_DUMP_UINT("prog_id", nla_getattr_u32(tb[IFLA_XDP_PROG_ID]));
+
+	if (mode == XDP_ATTACHED_MULTI) {
+		xdp_dump_prog_id(tb, IFLA_XDP_SKB_PROG_ID, "generic_prog_id");
+		xdp_dump_prog_id(tb, IFLA_XDP_DRV_PROG_ID, "drv_prog_id");
+		xdp_dump_prog_id(tb, IFLA_XDP_HW_PROG_ID, "offload_prog_id");
+	}
+
+	NET_END_OBJECT_FINAL;
+	return 0;
+}
+
+int do_xdp_dump(struct ifinfomsg *ifinfo, struct nlattr **tb)
+{
+	if (!tb[IFLA_XDP])
+		return 0;
+
+	return do_xdp_dump_one(tb[IFLA_XDP], ifinfo->ifi_index,
+			       nla_getattr_str(tb[IFLA_IFNAME]));
+}
+
+static char *hexstring_n2a(const unsigned char *str, int len,
+			   char *buf, int blen)
+{
+	char *ptr = buf;
+	int i;
+
+	for (i = 0; i < len; i++) {
+		if (blen < 3)
+			break;
+		sprintf(ptr, "%02x", str[i]);
+		ptr += 2;
+		blen -= 2;
+	}
+	return buf;
+}
+
+static int do_bpf_dump_one_act(struct nlattr *attr)
+{
+	struct nlattr *tb[TCA_ACT_BPF_MAX + 1];
+	char buf[256];
+
+	if (nla_parse_nested(tb, TCA_ACT_BPF_MAX, attr, NULL) < 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	if (!tb[TCA_ACT_BPF_PARMS])
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	NET_START_OBJECT_NESTED2;
+	if (tb[TCA_ACT_BPF_NAME])
+		NET_DUMP_STR("name", nla_getattr_str(tb[TCA_ACT_BPF_NAME]));
+	if (tb[TCA_ACT_BPF_ID])
+		NET_DUMP_UINT("bpf_id", nla_getattr_u32(tb[TCA_ACT_BPF_ID]));
+	if (tb[TCA_ACT_BPF_TAG])
+		NET_DUMP_STR("tag", hexstring_n2a(nla_data(tb[TCA_ACT_BPF_TAG]),
+						  nla_len(tb[TCA_ACT_BPF_TAG]),
+						  buf, sizeof(buf)));
+	NET_END_OBJECT_NESTED;
+	return 0;
+}
+
+static int do_dump_one_act(struct nlattr *attr)
+{
+	struct nlattr *tb[TCA_ACT_MAX + 1];
+
+	if (!attr)
+		return 0;
+
+	if (nla_parse_nested(tb, TCA_ACT_MAX, attr, NULL) < 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	if (tb[TCA_ACT_KIND] && strcmp(nla_data(tb[TCA_ACT_KIND]), "bpf") == 0)
+		return do_bpf_dump_one_act(tb[TCA_ACT_OPTIONS]);
+
+	return 0;
+}
+
+static int do_bpf_act_dump(struct nlattr *attr)
+{
+	struct nlattr *tb[TCA_ACT_MAX_PRIO + 1];
+	int act, ret;
+
+	if (nla_parse_nested(tb, TCA_ACT_MAX_PRIO, attr, NULL) < 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	NET_START_ARRAY("act", "");
+	for (act = 0; act <= TCA_ACT_MAX_PRIO; act++) {
+		ret = do_dump_one_act(tb[act]);
+		if (ret)
+			break;
+	}
+	NET_END_ARRAY(" ");
+
+	return ret;
+}
+
+static int do_bpf_filter_dump(struct nlattr *attr)
+{
+	struct nlattr *tb[TCA_BPF_MAX + 1];
+	char buf[256];
+	int ret;
+
+	if (nla_parse_nested(tb, TCA_BPF_MAX, attr, NULL) < 0)
+		return -LIBBPF_ERRNO__NLPARSE;
+
+	if (tb[TCA_BPF_NAME])
+		NET_DUMP_STR("name", nla_getattr_str(tb[TCA_BPF_NAME]));
+	if (tb[TCA_BPF_ID])
+		NET_DUMP_UINT("prog_id", nla_getattr_u32(tb[TCA_BPF_ID]));
+	if (tb[TCA_BPF_TAG])
+		NET_DUMP_STR("tag", hexstring_n2a(nla_data(tb[TCA_BPF_TAG]),
+						  nla_len(tb[TCA_BPF_TAG]),
+						  buf, sizeof(buf)));
+	if (tb[TCA_BPF_ACT]) {
+		ret = do_bpf_act_dump(tb[TCA_BPF_ACT]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+int do_filter_dump(struct tcmsg *info, struct nlattr **tb, const char *kind)
+{
+	int ret = 0;
+
+	if (tb[TCA_OPTIONS] && tb[TCA_KIND] && strcmp(nla_data(tb[TCA_KIND]), "bpf") == 0) {
+		NET_START_OBJECT;
+		NET_DUMP_UINT("ifindex", info->tcm_ifindex);
+		NET_DUMP_STR("kind", kind);
+		ret = do_bpf_filter_dump(tb[TCA_OPTIONS]);
+		NET_END_OBJECT_FINAL;
+	}
+
+	return ret;
+}
diff --git a/tools/bpf/bpftool/netlink_dumper.h b/tools/bpf/bpftool/netlink_dumper.h
new file mode 100644
index 000000000000..552d8851ac06
--- /dev/null
+++ b/tools/bpf/bpftool/netlink_dumper.h
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0+
+// Copyright (C) 2018 Facebook
+
+#ifndef _NETLINK_DUMPER_H_
+#define _NETLINK_DUMPER_H_
+
+#define NET_START_OBJECT				\
+{							\
+	if (json_output)				\
+		jsonw_start_object(json_wtr);		\
+}
+
+#define NET_START_OBJECT_NESTED(name)			\
+{							\
+	if (json_output) {				\
+		jsonw_name(json_wtr, name);		\
+		jsonw_start_object(json_wtr);		\
+	} else {					\
+		fprintf(stderr, "%s {", name);		\
+	}						\
+}
+
+#define NET_START_OBJECT_NESTED2			\
+{							\
+	if (json_output)				\
+		jsonw_start_object(json_wtr);		\
+	else						\
+		fprintf(stderr, "{");			\
+}
+
+#define NET_END_OBJECT_NESTED				\
+{							\
+	if (json_output)				\
+		jsonw_end_object(json_wtr);		\
+	else						\
+		fprintf(stderr, "}");			\
+}
+
+#define NET_END_OBJECT					\
+{							\
+	if (json_output)				\
+		jsonw_end_object(json_wtr);		\
+}
+
+#define NET_END_OBJECT_FINAL				\
+{							\
+	if (json_output)				\
+		jsonw_end_object(json_wtr);		\
+	else						\
+		fprintf(stderr, "\n");			\
+}
+
+#define NET_START_ARRAY(name, newline)			\
+{							\
+	if (json_output) {				\
+		jsonw_name(json_wtr, name);		\
+		jsonw_start_array(json_wtr);		\
+	} else {					\
+		fprintf(stderr, "%s [%s", name, newline);\
+	}						\
+}
+
+#define NET_END_ARRAY(endstr)				\
+{							\
+	if (json_output)				\
+		jsonw_end_array(json_wtr);		\
+	else						\
+		fprintf(stderr, "]%s", endstr);		\
+}
+
+#define NET_DUMP_UINT(name, val)			\
+{							\
+	if (json_output)				\
+		jsonw_uint_field(json_wtr, name, val);	\
+	else						\
+		fprintf(stderr, "%s %d ", name, val);	\
+}
+
+#define NET_DUMP_LLUINT(name, val)			\
+{							\
+	if (json_output)				\
+		jsonw_lluint_field(json_wtr, name, val);\
+	else						\
+		fprintf(stderr, "%s %lld ", name, val);	\
+}
+
+#define NET_DUMP_STR(name, str)				\
+{							\
+	if (json_output)				\
+		jsonw_string_field(json_wtr, name, str);\
+	else						\
+		fprintf(stderr, "%s %s ", name, str);	\
+}
+
+#define NET_DUMP_STR_ONLY(str)				\
+{							\
+	if (json_output)				\
+		jsonw_string(json_wtr, str);		\
+	else						\
+		fprintf(stderr, "%s ", str);		\
+}
+
+#endif
-- 
2.17.1


* Re: [PATCH bpf-next 0/4] tools/bpf: add bpftool net support
  2018-09-05 23:58 [PATCH bpf-next 0/4] tools/bpf: add bpftool net support Yonghong Song
                   ` (3 preceding siblings ...)
  2018-09-05 23:58 ` [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support Yonghong Song
@ 2018-09-06 18:11 ` Alexei Starovoitov
  4 siblings, 0 replies; 10+ messages in thread
From: Alexei Starovoitov @ 2018-09-06 18:11 UTC (permalink / raw)
  To: Yonghong Song; +Cc: ast, daniel, netdev, kernel-team

On Wed, Sep 05, 2018 at 04:58:02PM -0700, Yonghong Song wrote:
> As bpf usage becomes more pervasive, people start to worry
> about its cpu and memory cost. On a particular host,
> people often want to know all running bpf programs
> and their attachment contexts, so they can quickly relate
> a performance/memory anomaly to a particular bpf
> program or application.
> 
> bpftool already provides pretty good coverage for perf
> and cgroup related attachments. This patch set enables
> dumping attachment info for xdp and tc bpf programs.
> 
> Currently, users can already use "ip link show <dev>" and
> "tc filter show dev <dev> ..." to dump bpf program attachment
> information for xdp and tc bpf programs. The main reason
> to implement this functionality in bpftool as well is a
> better user experience. We want bpftool to be the
> ultimate tool for bpf introspection. The bpftool net
> implementation presents only the necessary bpf attachment
> information to the user, ignoring most other ip/tc
> specific information.
> 
> For example, below is a pretty json print of the xdp
> and tc_filters attachments.
> 
>   $ ./bpftool -jp net
>   [{
>         "xdp": [{
>                 "ifindex": 2,
>                 "devname": "eth0",
>                 "prog_id": 198
>             }
>         ],
>         "tc_filters": [{
>                 "ifindex": 2,
>                 "kind": "qdisc_htb",
>                 "name": "prefix_matcher.o:[cls_prefix_matcher_htb]",
>                 "prog_id": 111727,
>                 "tag": "d08fe3b4319bc2fd",
>                 "act": []
>             },{
>                 "ifindex": 2,
>                 "kind": "qdisc_clsact_ingress",
>                 "name": "fbflow_icmp",
>                 "prog_id": 130246,
>                 "tag": "3f265c7f26db62c9",
>                 "act": []
>             },{
>                 "ifindex": 2,
>                 "kind": "qdisc_clsact_egress",
>                 "name": "prefix_matcher.o:[cls_prefix_matcher_clsact]",
>                 "prog_id": 111726,
>                 "tag": "99a197826974c876"
>             },{
>                 "ifindex": 2,
>                 "kind": "qdisc_clsact_egress",
>                 "name": "cls_fg_dscp",
>                 "prog_id": 108619,
>                 "tag": "dc4630674fd72dcc",
>                 "act": []
>             },{
>                 "ifindex": 2,
>                 "kind": "qdisc_clsact_egress",
>                 "name": "fbflow_egress",
>                 "prog_id": 130245,
>                 "tag": "72d2d830d6888d2c"
>             }
>         ]
>     }
>   ]
> 
> Patch #1 syncs the kernel uapi header if_link.h to the tools
> directory. Patch #2 moves the netlink related functions in
> tools/lib/bpf/bpf.c to a new file. Patch #3 implements
> additional functions in libbpf which will be used in Patch #4.
> Patch #4 implements bpftool net support to dump
> xdp and tc bpf program attachments.

Applied, Thanks


* Re: [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support
  2018-09-05 23:58 ` [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support Yonghong Song
@ 2018-09-12 22:29   ` Daniel Borkmann
  2018-09-12 22:40     ` Daniel Borkmann
  2018-09-13  4:57     ` Yonghong Song
  0 siblings, 2 replies; 10+ messages in thread
From: Daniel Borkmann @ 2018-09-12 22:29 UTC (permalink / raw)
  To: Yonghong Song, ast, netdev; +Cc: kernel-team

On 09/06/2018 01:58 AM, Yonghong Song wrote:
> Add "bpftool net" support. Networking devices are enumerated
> to dump device index/name associated with xdp progs.
> 
> For each networking device, tc classes and qdiscs are enumerated
> in order to check their bpf filters.
> In addition, root handle and clsact ingress/egress are also checked for
> bpf filters.  Not all filter information is printed out. Only ifindex,
> kind, filter name, prog_id and tag are printed out, which are good
> enough to show attachment information. If the filter action
> is a bpf action, its bpf program id, bpf name and tag will be
> printed out as well.
> 
> For example,
>   $ ./bpftool net
>   xdp [
>   ifindex 2 devname eth0 prog_id 198
>   ]

Could we make the output more terse? E.g. 'ifindex' and 'devname' are basically
zero information but take up lots of space; 'eth0 (2)', for example, would make
it shorter. Also, info is missing on whether the attached prog is
driver/hw/generic XDP. :(
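
For illustration, a hypothetical terser xdp line combining both points (devname
with ifindex in parentheses, plus a mode indicator) might look like:

  xdp [
  eth0 (2) driver prog_id 198
  ]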

>   tc_filters [
>   ifindex 2 kind qdisc_htb name prefix_matcher.o:[cls_prefix_matcher_htb]
>             prog_id 111727 tag d08fe3b4319bc2fd act []
>   ifindex 2 kind qdisc_clsact_ingress name fbflow_icmp
>             prog_id 130246 tag 3f265c7f26db62c9 act []
>   ifindex 2 kind qdisc_clsact_egress name prefix_matcher.o:[cls_prefix_matcher_clsact]
>             prog_id 111726 tag 99a197826974c876
>   ifindex 2 kind qdisc_clsact_egress name cls_fg_dscp
>             prog_id 108619 tag dc4630674fd72dcc act []
>   ifindex 2 kind qdisc_clsact_egress name fbflow_egress
>             prog_id 130245 tag 72d2d830d6888d2c
>   ]

Similar comment here. Do we need the tag here? I think it's not really needed;
e.g. the output of bpftool perf [0] doesn't provide it either, which keeps the
list as nice one-liners, so the overview is much nicer there. Is there a reason
that tc progs do not show the dev name as opposed to xdp progs? Can we shorten
everything to make it a one-liner like in bpftool perf?
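
Again for illustration, a hypothetical one-liner per filter, in the spirit of
the bpftool perf output, might look like:

  eth0 (2) clsact/ingress fbflow_icmp prog_id 130246
  eth0 (2) clsact/egress fbflow_egress prog_id 130245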

Should we have a small indicator here if the tc prog was offloaded?

Does the dump work with tc shared blocks?
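
For context: with tc shared blocks, filters are attached to a block shared by
several qdiscs instead of to a single device, so a purely per-device walk can
miss them. A rough setup sketch, assuming an iproute2 with shared block
support:

  $ tc qdisc add dev eth0 ingress_block 1 clsact
  $ tc qdisc add dev eth1 ingress_block 1 clsact
  $ tc filter add block 1 protocol all prio 1 bpf da obj prog.o sec classifier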

Should we also dump networking related cgroup BPF progs here under bpftool net?

Thanks,
Daniel

  [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b04df400c30235fa347313c9e2a0695549bd2c8e


* Re: [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support
  2018-09-12 22:29   ` Daniel Borkmann
@ 2018-09-12 22:40     ` Daniel Borkmann
  2018-09-13  5:01       ` Yonghong Song
  2018-09-13  4:57     ` Yonghong Song
  1 sibling, 1 reply; 10+ messages in thread
From: Daniel Borkmann @ 2018-09-12 22:40 UTC (permalink / raw)
  To: Yonghong Song, ast, netdev; +Cc: kernel-team

On 09/13/2018 12:29 AM, Daniel Borkmann wrote:
> On 09/06/2018 01:58 AM, Yonghong Song wrote:
>> Add "bpftool net" support. Networking devices are enumerated
>> to dump device index/name associated with xdp progs.
>>
>> For each networking device, tc classes and qdiscs are enumerated
>> in order to check their bpf filters.
>> In addition, root handle and clsact ingress/egress are also checked for
>> bpf filters.  Not all filter information is printed out. Only ifindex,
>> kind, filter name, prog_id and tag are printed out, which are good
>> enough to show attachment information. If the filter action
>> is a bpf action, its bpf program id, bpf name and tag will be
>> printed out as well.
>>
>> For example,
>>   $ ./bpftool net
>>   xdp [
>>   ifindex 2 devname eth0 prog_id 198
>>   ]
> 
> Could we make the output more terse? E.g. 'ifindex' and 'devname' are basically
> zero information but take up lots of space; 'eth0 (2)', for example, would make
> it shorter. Also, info is missing on whether the attached prog is
> driver/hw/generic XDP. :(
> 
>>   tc_filters [
>>   ifindex 2 kind qdisc_htb name prefix_matcher.o:[cls_prefix_matcher_htb]
>>             prog_id 111727 tag d08fe3b4319bc2fd act []
>>   ifindex 2 kind qdisc_clsact_ingress name fbflow_icmp
>>             prog_id 130246 tag 3f265c7f26db62c9 act []
>>   ifindex 2 kind qdisc_clsact_egress name prefix_matcher.o:[cls_prefix_matcher_clsact]
>>             prog_id 111726 tag 99a197826974c876
>>   ifindex 2 kind qdisc_clsact_egress name cls_fg_dscp
>>             prog_id 108619 tag dc4630674fd72dcc act []
>>   ifindex 2 kind qdisc_clsact_egress name fbflow_egress
>>             prog_id 130245 tag 72d2d830d6888d2c
>>   ]
> 
> Similar comment here. Do we need the tag here? I think it's not really needed;
> e.g. the output of bpftool perf [0] doesn't provide it either, which keeps the
> list as nice one-liners, so the overview is much nicer there. Is there a reason
> that tc progs do not show the dev name as opposed to xdp progs? Can we shorten
> everything to make it a one-liner like in bpftool perf?
> 
> Should we have a small indicator here if the tc prog was offloaded?
> 
> Does the dump work with tc shared blocks?
> 
> Should we also dump networking related cgroup BPF progs here under bpftool net?

One more thought: I think it would make sense to also explicitly document the
current limitations, i.e. the progs that this listing does not show and for
which the user should consult other tools, aka iproute2, e.g. lwt, seg6, tc
bpf actions, etc., so the current scope would be clearer for users.

> Thanks,
> Daniel
> 
>   [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b04df400c30235fa347313c9e2a0695549bd2c8e
> 


* Re: [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support
  2018-09-12 22:29   ` Daniel Borkmann
  2018-09-12 22:40     ` Daniel Borkmann
@ 2018-09-13  4:57     ` Yonghong Song
  1 sibling, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2018-09-13  4:57 UTC (permalink / raw)
  To: Daniel Borkmann, ast, netdev; +Cc: kernel-team



On 9/12/18 3:29 PM, Daniel Borkmann wrote:
> On 09/06/2018 01:58 AM, Yonghong Song wrote:
>> Add "bpftool net" support. Networking devices are enumerated
>> to dump device index/name associated with xdp progs.
>>
>> For each networking device, tc classes and qdiscs are enumerated
>> in order to check their bpf filters.
>> In addition, root handle and clsact ingress/egress are also checked for
>> bpf filters.  Not all filter information is printed out. Only ifindex,
>> kind, filter name, prog_id and tag are printed out, which are good
>> enough to show attachment information. If the filter action
>> is a bpf action, its bpf program id, bpf name and tag will be
>> printed out as well.
>>
>> For example,
>>    $ ./bpftool net
>>    xdp [
>>    ifindex 2 devname eth0 prog_id 198
>>    ]
> 
> Could we make the output more terse? E.g. 'ifindex' and 'devname' are basically
> zero information but take up lots of space; 'eth0 (2)', for example, would make
> it shorter. Also, info is missing on whether the attached prog is
> driver/hw/generic XDP. :(

Right, 'eth0 (2)' is a good idea. Similar to other bpftool plain
output, I agree we should have a concise printout.
For the above xdp output, I guess I tested on an old kernel and will
test on a newer kernel to ensure it works properly.
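
For reference, a minimal sketch (assuming the uapi linux/if_link.h
definitions) of deriving the driver/generic/offload indicator from the
IFLA_XDP_ATTACHED attribute of an RTM_GETLINK response:

  #include <linux/if_link.h>

  /* Map IFLA_XDP_ATTACHED to a printable XDP mode; the
   * XDP_ATTACHED_* values come from linux/if_link.h.
   */
  static const char *xdp_mode_str(__u8 attached)
  {
          switch (attached) {
          case XDP_ATTACHED_DRV:
                  return "driver";
          case XDP_ATTACHED_SKB:
                  return "generic";
          case XDP_ATTACHED_HW:
                  return "offload";
          default:
                  return "none";
          }
  }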

> 
>>    tc_filters [
>>    ifindex 2 kind qdisc_htb name prefix_matcher.o:[cls_prefix_matcher_htb]
>>              prog_id 111727 tag d08fe3b4319bc2fd act []
>>    ifindex 2 kind qdisc_clsact_ingress name fbflow_icmp
>>              prog_id 130246 tag 3f265c7f26db62c9 act []
>>    ifindex 2 kind qdisc_clsact_egress name prefix_matcher.o:[cls_prefix_matcher_clsact]
>>              prog_id 111726 tag 99a197826974c876
>>    ifindex 2 kind qdisc_clsact_egress name cls_fg_dscp
>>              prog_id 108619 tag dc4630674fd72dcc act []
>>    ifindex 2 kind qdisc_clsact_egress name fbflow_egress
>>              prog_id 130245 tag 72d2d830d6888d2c
>>    ]
> 
> Similar comment here. Do we need the tag here? I think it's not really needed;
> e.g. the output of bpftool perf [0] doesn't provide it either, which keeps the
> list as nice one-liners, so the overview is much nicer there. Is there a reason
> that tc progs do not show the dev name as opposed to xdp progs? Can we shorten
> everything to make it a one-liner like in bpftool perf?

Yes, we should remove 'tag'; users can use the prog_id to find the tag.
There is no IFNAME attribute in the tc filter return message,
but I already got the dev names from the earlier iplink message, so I can
print them out here.
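
A minimal sketch of the alternative, resolving the name per ifindex with
if_indextoname(3) instead of caching it from the iplink dump:

  #include <net/if.h>
  #include <stdio.h>

  /* Print "eth0(2)"-style device info for a tc filter message
   * that carries only the ifindex; if_indextoname() asks the
   * kernel for the current name of the device.
   */
  static void print_dev(unsigned int ifindex)
  {
          char name[IF_NAMESIZE];

          if (if_indextoname(ifindex, name))
                  printf("%s(%u)", name, ifindex);
          else
                  printf("ifindex %u", ifindex);
  }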

> Should we have a small indicator here if the tc prog was offloaded?

Not sure about this one. As you suggested in the next email, we
can have a message saying that if users want more information they
can use more specific tools such as 'ip link ...', 'tc filter ...', etc.

> 
> Does the dump work with tc shared blocks?

It does not. I do not have experience with tc shared blocks, which is
why I did not add support for them.

> 
> Should we also dump networking related cgroup BPF progs here under bpftool net?

Probably not, but in addition to the above suggestions about using
different tools, we can add a message suggesting to check `bpftool cgroup
tree` for cgroup-related net programs.
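
For example (output shape illustrative only):

  $ bpftool cgroup tree /sys/fs/cgroup
  CgroupPath
  ID       AttachType      AttachFlags     Name
  /sys/fs/cgroup/system.slice/foo.service
      42       ingress                      my_filter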

> 
> Thanks,
> Daniel
> 
>    [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b04df400c30235fa347313c9e2a0695549bd2c8e
> 


* Re: [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support
  2018-09-12 22:40     ` Daniel Borkmann
@ 2018-09-13  5:01       ` Yonghong Song
  0 siblings, 0 replies; 10+ messages in thread
From: Yonghong Song @ 2018-09-13  5:01 UTC (permalink / raw)
  To: Daniel Borkmann, ast, netdev; +Cc: kernel-team



On 9/12/18 3:40 PM, Daniel Borkmann wrote:
> On 09/13/2018 12:29 AM, Daniel Borkmann wrote:
>> On 09/06/2018 01:58 AM, Yonghong Song wrote:
>>> Add "bpftool net" support. Networking devices are enumerated
>>> to dump device index/name associated with xdp progs.
>>>
>>> For each networking device, tc classes and qdiscs are enumerated
>>> in order to check their bpf filters.
>>> In addition, root handle and clsact ingress/egress are also checked for
>>> bpf filters.  Not all filter information is printed out. Only ifindex,
>>> kind, filter name, prog_id and tag are printed out, which are good
>>> enough to show attachment information. If the filter action
>>> is a bpf action, its bpf program id, bpf name and tag will be
>>> printed out as well.
>>>
>>> For example,
>>>    $ ./bpftool net
>>>    xdp [
>>>    ifindex 2 devname eth0 prog_id 198
>>>    ]
>>
>> Could we make the output more terse? E.g. 'ifindex' and 'devname' are basically
>> zero information but take up lots of space; 'eth0 (2)', for example, would make
>> it shorter. Also, info is missing on whether the attached prog is
>> driver/hw/generic XDP. :(
>>
>>>    tc_filters [
>>>    ifindex 2 kind qdisc_htb name prefix_matcher.o:[cls_prefix_matcher_htb]
>>>              prog_id 111727 tag d08fe3b4319bc2fd act []
>>>    ifindex 2 kind qdisc_clsact_ingress name fbflow_icmp
>>>              prog_id 130246 tag 3f265c7f26db62c9 act []
>>>    ifindex 2 kind qdisc_clsact_egress name prefix_matcher.o:[cls_prefix_matcher_clsact]
>>>              prog_id 111726 tag 99a197826974c876
>>>    ifindex 2 kind qdisc_clsact_egress name cls_fg_dscp
>>>              prog_id 108619 tag dc4630674fd72dcc act []
>>>    ifindex 2 kind qdisc_clsact_egress name fbflow_egress
>>>              prog_id 130245 tag 72d2d830d6888d2c
>>>    ]
>>
>> Similar comment here. Do we need the tag here? I think it's not really needed;
>> e.g. the output of bpftool perf [0] doesn't provide it either, which keeps the
>> list as nice one-liners, so the overview is much nicer there. Is there a reason
>> that tc progs do not show the dev name as opposed to xdp progs? Can we shorten
>> everything to make it a one-liner like in bpftool perf?
>>
>> Should we have a small indicator here if the tc prog was offloaded?
>>
>> Does the dump work with tc shared blocks?
>>
>> Should we also dump networking related cgroup BPF progs here under bpftool net?
> 
> One more thought: I think it would make sense to also explicitly document the
> current limitations, i.e. the progs that this listing does not show and for
> which the user should consult other tools, aka iproute2, e.g. lwt, seg6, tc
> bpf actions, etc., so the current scope would be clearer for users.

I will do a follow-up patch to address a few issues mentioned in my
previous email (mostly more concise output, briefly mentioning the
limitations of the tool, etc.), and also add more information to the
documentation and to the 'bpftool net help' output to set users'
expectations right about what this tool can do and how to find more
information.

> 
>> Thanks,
>> Daniel
>>
>>    [0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b04df400c30235fa347313c9e2a0695549bd2c8e
>>
> 


end of thread

Thread overview: 10+ messages
2018-09-05 23:58 [PATCH bpf-next 0/4] tools/bpf: add bpftool net support Yonghong Song
2018-09-05 23:58 ` [PATCH bpf-next 1/4] tools/bpf: sync kernel uapi header if_link.h to tools Yonghong Song
2018-09-05 23:58 ` [PATCH bpf-next 2/4] tools/bpf: move bpf/lib netlink related functions into a new file Yonghong Song
2018-09-05 23:58 ` [PATCH bpf-next 3/4] tools/bpf: add more netlink functionalities in lib/bpf Yonghong Song
2018-09-05 23:58 ` [PATCH bpf-next 4/4] tools/bpf: bpftool: add net support Yonghong Song
2018-09-12 22:29   ` Daniel Borkmann
2018-09-12 22:40     ` Daniel Borkmann
2018-09-13  5:01       ` Yonghong Song
2018-09-13  4:57     ` Yonghong Song
2018-09-06 18:11 ` [PATCH bpf-next 0/4] tools/bpf: add bpftool " Alexei Starovoitov
