linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] Process connector bug fixes & enhancements
@ 2023-03-09  3:19 Anjali Kulkarni
  2023-03-09  3:19 ` [PATCH 1/5] netlink: Reverse the patch which removed filtering Anjali Kulkarni
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09  3:19 UTC (permalink / raw)
  To: davem
  Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
	leon, keescook, socketcan, petrm, linux-kernel, netdev,
	anjali.k.kulkarni

From: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>

In this series, we add back filtering to the proc connector module. This
is required to fix some bugs, and it also enables the addition of
event-based filtering, which improves performance for anyone interested
in only a subset of process events, as compared to the current approach
of sending all event notifications.

Thus, a client can register to listen for only exit events, only fork
events, any mix of events, or all of them. This greatly enhances
performance: currently, a client must listen to all events, and there
are 9 different event types. For example, handling 3 event types
(8K forks + 8K exits + 8K execs) takes 200ms, handling 2 types
(8K forks + 8K exits) takes about 150ms, and handling just one type
(8K exits) takes about 70ms.

The reason we need the above changes, as well as a new event type
PROC_EVENT_NONZERO_EXIT (sent by the kernel to a listening application
only when a process exits with a non-zero exit status), is the following:

Oracle DB runs at large scale with hundreds of thousands of short-lived
processes starting up and exiting quickly. A process-monitoring DB
daemon, which tracks and cleans up after processes that have died
without a proper exit, needs notifications only when a process dies with
a non-zero exit code (which should be rare).

This change will give Oracle DB substantial performance savings: it
takes 50ms to scan about 8K PIDs in /proc, and about 500ms for 100K
PIDs. The DB performs this check every 3 seconds, so over an hour we
save 10 seconds for 100K PIDs.

Monitoring 8K process exits using pidfds took about three times as long
(200ms) compared to about 70ms using only the exit notifications of the
proc connector. Hence, we cannot use pidfds for our use case.

Such a new event could also be useful to other applications, like
Google's lmkd daemon, which needs exit notifications for killed processes.

This patch series is organized as follows -

Patch 1    : Needed for patches 2 & 3 to work.
Patches 2-3: Fix some bugs in the proc connector; details are in the patches.
Patch 4    : Allows non-root users access to proc connector events.
Patch 5    : Adds event-based filtering for performance enhancements.

Anjali Kulkarni (5):
  netlink: Reverse the patch which removed filtering
  connector/cn_proc: Add filtering to fix some bugs
  connector/cn_proc: Test code for proc connector
  connector/cn_proc: Allow non-root users access
  connector/cn_proc: Performance improvements

 drivers/connector/cn_proc.c     | 103 +++++++++--
 drivers/connector/connector.c   |  13 +-
 drivers/w1/w1_netlink.c         |   6 +-
 include/linux/connector.h       |   6 +-
 include/linux/netlink.h         |   5 +
 include/uapi/linux/cn_proc.h    |  62 +++++--
 net/netlink/af_netlink.c        |  35 +++-
 samples/connector/proc_filter.c | 299 ++++++++++++++++++++++++++++++++
 8 files changed, 485 insertions(+), 44 deletions(-)
 create mode 100644 samples/connector/proc_filter.c

-- 
2.39.2


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/5] netlink: Reverse the patch which removed filtering
  2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
@ 2023-03-09  3:19 ` Anjali Kulkarni
  2023-03-09  3:19 ` [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs Anjali Kulkarni
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09  3:19 UTC (permalink / raw)
  To: davem
  Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
	leon, keescook, socketcan, petrm, linux-kernel, netdev,
	anjali.k.kulkarni

To use filtering at the connector & cn_proc layers, we need to enable
filtering in the netlink layer. This reverts the commit that removed
netlink filtering:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=549017aa1bb7

Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
 include/linux/netlink.h  |  5 +++++
 net/netlink/af_netlink.c | 25 +++++++++++++++++++++++--
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/include/linux/netlink.h b/include/linux/netlink.h
index c43ac7690eca..866bbc5a4c8d 100644
--- a/include/linux/netlink.h
+++ b/include/linux/netlink.h
@@ -206,6 +206,11 @@ bool netlink_strict_get_check(struct sk_buff *skb);
 int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 portid, int nonblock);
 int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 portid,
 		      __u32 group, gfp_t allocation);
+int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb,
+			       __u32 portid, __u32 group, gfp_t allocation,
+			       int (*filter)(struct sock *dsk,
+					     struct sk_buff *skb, void *data),
+			       void *filter_data);
 int netlink_set_err(struct sock *ssk, __u32 portid, __u32 group, int code);
 int netlink_register_notifier(struct notifier_block *nb);
 int netlink_unregister_notifier(struct notifier_block *nb);
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index c64277659753..003c7e6ec9be 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -1432,6 +1432,8 @@ struct netlink_broadcast_data {
 	int delivered;
 	gfp_t allocation;
 	struct sk_buff *skb, *skb2;
+	int (*tx_filter)(struct sock *dsk, struct sk_buff *skb, void *data);
+	void *tx_data;
 };
 
 static void do_one_broadcast(struct sock *sk,
@@ -1485,6 +1487,11 @@ static void do_one_broadcast(struct sock *sk,
 			p->delivery_failure = 1;
 		goto out;
 	}
+	if (p->tx_filter && p->tx_filter(sk, p->skb2, p->tx_data)) {
+		kfree_skb(p->skb2);
+		p->skb2 = NULL;
+		goto out;
+	}
 	if (sk_filter(sk, p->skb2)) {
 		kfree_skb(p->skb2);
 		p->skb2 = NULL;
@@ -1507,8 +1514,12 @@ static void do_one_broadcast(struct sock *sk,
 	sock_put(sk);
 }
 
-int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
-		      u32 group, gfp_t allocation)
+int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb,
+			       u32 portid,
+			       u32 group, gfp_t allocation,
+			       int (*filter)(struct sock *dsk,
+					     struct sk_buff *skb, void *data),
+			       void *filter_data)
 {
 	struct net *net = sock_net(ssk);
 	struct netlink_broadcast_data info;
@@ -1527,6 +1538,8 @@ int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
 	info.allocation = allocation;
 	info.skb = skb;
 	info.skb2 = NULL;
+	info.tx_filter = filter;
+	info.tx_data = filter_data;
 
 	/* While we sleep in clone, do not allow to change socket list */
 
@@ -1552,6 +1565,14 @@ int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
 	}
 	return -ESRCH;
 }
+EXPORT_SYMBOL(netlink_broadcast_filtered);
+
+int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
+		      u32 group, gfp_t allocation)
+{
+	return netlink_broadcast_filtered(ssk, skb, portid, group, allocation,
+					  NULL, NULL);
+}
 EXPORT_SYMBOL(netlink_broadcast);
 
 struct netlink_set_err_data {
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs
  2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
  2023-03-09  3:19 ` [PATCH 1/5] netlink: Reverse the patch which removed filtering Anjali Kulkarni
@ 2023-03-09  3:19 ` Anjali Kulkarni
  2023-03-09  7:15   ` kernel test robot
  2023-03-09  3:19 ` [PATCH 3/5] connector/cn_proc: Test code for proc connector Anjali Kulkarni
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09  3:19 UTC (permalink / raw)
  To: davem
  Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
	leon, keescook, socketcan, petrm, linux-kernel, netdev,
	anjali.k.kulkarni

One bug is that if there is more than one listener for proc connector
messages, and one of them deregisters using PROC_CN_MCAST_IGNORE, it
will still receive all proc connector messages as long as another
listener remains.

Another issue is that if one client calls PROC_CN_MCAST_LISTEN and
another calls PROC_CN_MCAST_IGNORE, then neither ends up receiving any
messages.

This patch adds filtering and drops the packet if the client has sent
PROC_CN_MCAST_IGNORE. This state is stored in the client socket's
sk_user_data. In addition, we only increment or decrement
proc_event_num_listeners once per client. This fixes the above issues.

Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
 drivers/connector/cn_proc.c   | 54 ++++++++++++++++++++++++++++-------
 drivers/connector/connector.c | 12 +++++---
 drivers/w1/w1_netlink.c       |  6 ++--
 include/linux/connector.h     |  6 +++-
 include/uapi/linux/cn_proc.h  | 43 ++++++++++++++++------------
 net/netlink/af_netlink.c      | 10 +++++--
 6 files changed, 94 insertions(+), 37 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index ccac1c453080..ef3820b43b5c 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -48,6 +48,22 @@ static DEFINE_PER_CPU(struct local_event, local_event) = {
 	.lock = INIT_LOCAL_LOCK(lock),
 };
 
+int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data)
+{
+	enum proc_cn_mcast_op mc_op;
+
+	if (!dsk)
+		return 0;
+
+	mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op;
+
+	if (mc_op == PROC_CN_MCAST_IGNORE)
+		return 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cn_filter);
+
 static inline void send_msg(struct cn_msg *msg)
 {
 	local_lock(&local_event.lock);
@@ -61,7 +77,8 @@ static inline void send_msg(struct cn_msg *msg)
 	 *
 	 * If cn_netlink_send() fails, the data is not sent.
 	 */
-	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
+	cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
+			     cn_filter, NULL);
 
 	local_unlock(&local_event.lock);
 }
@@ -346,11 +363,9 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
 static void cn_proc_mcast_ctl(struct cn_msg *msg,
 			      struct netlink_skb_parms *nsp)
 {
-	enum proc_cn_mcast_op *mc_op = NULL;
-	int err = 0;
-
-	if (msg->len != sizeof(*mc_op))
-		return;
+	enum proc_cn_mcast_op mc_op = 0, prev_mc_op = 0;
+	int err = 0, initial = 0;
+	struct sock *sk = NULL;
 
 	/* 
 	 * Events are reported with respect to the initial pid
@@ -367,13 +382,32 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 		goto out;
 	}
 
-	mc_op = (enum proc_cn_mcast_op *)msg->data;
-	switch (*mc_op) {
+	if (msg->len == sizeof(mc_op))
+		mc_op = *((enum proc_cn_mcast_op *)msg->data);
+	else
+		return;
+
+	if (nsp->sk) {
+		sk = nsp->sk;
+		if (sk->sk_user_data == NULL) {
+			sk->sk_user_data = kzalloc(sizeof(struct proc_input),
+						   GFP_KERNEL);
+			initial = 1;
+		} else {
+			prev_mc_op =
+			((struct proc_input *)(sk->sk_user_data))->mcast_op;
+		}
+		((struct proc_input *)(sk->sk_user_data))->mcast_op = mc_op;
+	}
+
+	switch (mc_op) {
 	case PROC_CN_MCAST_LISTEN:
-		atomic_inc(&proc_event_num_listeners);
+		if (initial || (prev_mc_op != PROC_CN_MCAST_LISTEN))
+			atomic_inc(&proc_event_num_listeners);
 		break;
 	case PROC_CN_MCAST_IGNORE:
-		atomic_dec(&proc_event_num_listeners);
+		if (!initial && (prev_mc_op != PROC_CN_MCAST_IGNORE))
+			atomic_dec(&proc_event_num_listeners);
 		break;
 	default:
 		err = EINVAL;
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index 48ec7ce6ecac..1b7851b1aa0f 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -59,7 +59,9 @@ static int cn_already_initialized;
  * both, or if both are zero then the group is looked up and sent there.
  */
 int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
-	gfp_t gfp_mask)
+	gfp_t gfp_mask,
+	int (*filter)(struct sock *dsk, struct sk_buff *skb, void *data),
+	void *filter_data)
 {
 	struct cn_callback_entry *__cbq;
 	unsigned int size;
@@ -110,8 +112,9 @@ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
 	NETLINK_CB(skb).dst_group = group;
 
 	if (group)
-		return netlink_broadcast(dev->nls, skb, portid, group,
-					 gfp_mask);
+		return netlink_broadcast_filtered(dev->nls, skb, portid, group,
+						  gfp_mask, filter,
+						  (void *)filter_data);
 	return netlink_unicast(dev->nls, skb, portid,
 			!gfpflags_allow_blocking(gfp_mask));
 }
@@ -121,7 +124,8 @@ EXPORT_SYMBOL_GPL(cn_netlink_send_mult);
 int cn_netlink_send(struct cn_msg *msg, u32 portid, u32 __group,
 	gfp_t gfp_mask)
 {
-	return cn_netlink_send_mult(msg, msg->len, portid, __group, gfp_mask);
+	return cn_netlink_send_mult(msg, msg->len, portid, __group, gfp_mask,
+				    NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(cn_netlink_send);
 
diff --git a/drivers/w1/w1_netlink.c b/drivers/w1/w1_netlink.c
index db110cc442b1..691978cddab7 100644
--- a/drivers/w1/w1_netlink.c
+++ b/drivers/w1/w1_netlink.c
@@ -65,7 +65,8 @@ static void w1_unref_block(struct w1_cb_block *block)
 		u16 len = w1_reply_len(block);
 		if (len) {
 			cn_netlink_send_mult(block->first_cn, len,
-				block->portid, 0, GFP_KERNEL);
+					     block->portid, 0,
+					     GFP_KERNEL, NULL, NULL);
 		}
 		kfree(block);
 	}
@@ -83,7 +84,8 @@ static void w1_reply_make_space(struct w1_cb_block *block, u16 space)
 {
 	u16 len = w1_reply_len(block);
 	if (len + space >= block->maxlen) {
-		cn_netlink_send_mult(block->first_cn, len, block->portid, 0, GFP_KERNEL);
+		cn_netlink_send_mult(block->first_cn, len, block->portid,
+				     0, GFP_KERNEL, NULL, NULL);
 		block->first_cn->len = 0;
 		block->cn = NULL;
 		block->msg = NULL;
diff --git a/include/linux/connector.h b/include/linux/connector.h
index 487350bb19c3..1336a5e7dd2f 100644
--- a/include/linux/connector.h
+++ b/include/linux/connector.h
@@ -96,7 +96,11 @@ void cn_del_callback(const struct cb_id *id);
  *
  * If there are no listeners for given group %-ESRCH can be returned.
  */
-int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 group, gfp_t gfp_mask);
+int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid,
+			 u32 group, gfp_t gfp_mask,
+			 int (*filter)(struct sock *dsk, struct sk_buff *skb,
+				       void *data),
+			 void *filter_data);
 
 /**
  * cn_netlink_send - Sends message to the specified groups.
diff --git a/include/uapi/linux/cn_proc.h b/include/uapi/linux/cn_proc.h
index db210625cee8..6a06fb424313 100644
--- a/include/uapi/linux/cn_proc.h
+++ b/include/uapi/linux/cn_proc.h
@@ -30,6 +30,30 @@ enum proc_cn_mcast_op {
 	PROC_CN_MCAST_IGNORE = 2
 };
 
+enum proc_cn_event {
+	/* Use successive bits so the enums can be used to record
+	 * sets of events as well
+	 */
+	PROC_EVENT_NONE = 0x00000000,
+	PROC_EVENT_FORK = 0x00000001,
+	PROC_EVENT_EXEC = 0x00000002,
+	PROC_EVENT_UID  = 0x00000004,
+	PROC_EVENT_GID  = 0x00000040,
+	PROC_EVENT_SID  = 0x00000080,
+	PROC_EVENT_PTRACE = 0x00000100,
+	PROC_EVENT_COMM = 0x00000200,
+	/* "next" should be 0x00000400 */
+	/* "last" is the last process event: exit,
+	 * while "next to last" is coredumping event
+	 */
+	PROC_EVENT_COREDUMP = 0x40000000,
+	PROC_EVENT_EXIT = 0x80000000
+};
+
+struct proc_input {
+	enum proc_cn_mcast_op mcast_op;
+};
+
 /*
  * From the user's point of view, the process
  * ID is the thread group ID and thread ID is the internal
@@ -44,24 +68,7 @@ enum proc_cn_mcast_op {
  */
 
 struct proc_event {
-	enum what {
-		/* Use successive bits so the enums can be used to record
-		 * sets of events as well
-		 */
-		PROC_EVENT_NONE = 0x00000000,
-		PROC_EVENT_FORK = 0x00000001,
-		PROC_EVENT_EXEC = 0x00000002,
-		PROC_EVENT_UID  = 0x00000004,
-		PROC_EVENT_GID  = 0x00000040,
-		PROC_EVENT_SID  = 0x00000080,
-		PROC_EVENT_PTRACE = 0x00000100,
-		PROC_EVENT_COMM = 0x00000200,
-		/* "next" should be 0x00000400 */
-		/* "last" is the last process event: exit,
-		 * while "next to last" is coredumping event */
-		PROC_EVENT_COREDUMP = 0x40000000,
-		PROC_EVENT_EXIT = 0x80000000
-	} what;
+	enum proc_cn_event what;
 	__u32 cpu;
 	__u64 __attribute__((aligned(8))) timestamp_ns;
 		/* Number of nano seconds since system boot */
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index 003c7e6ec9be..b311375b8c4c 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -63,6 +63,7 @@
 #include <linux/net_namespace.h>
 #include <linux/nospec.h>
 #include <linux/btf_ids.h>
+#include <linux/connector.h>
 
 #include <net/net_namespace.h>
 #include <net/netns/generic.h>
@@ -767,9 +768,14 @@ static int netlink_release(struct socket *sock)
 	/* must not acquire netlink_table_lock in any way again before unbind
 	 * and notifying genetlink is done as otherwise it might deadlock
 	 */
-	if (nlk->netlink_unbind) {
+	if (nlk->netlink_unbind && nlk->groups) {
 		int i;
-
+		if (sk->sk_protocol == NETLINK_CONNECTOR) {
+			if (test_bit(CN_IDX_PROC - 1, nlk->groups)) {
+				kfree(sk->sk_user_data);
+				sk->sk_user_data = NULL;
+			}
+		}
 		for (i = 0; i < nlk->ngroups; i++)
 			if (test_bit(i, nlk->groups))
 				nlk->netlink_unbind(sock_net(sk), i + 1);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 3/5] connector/cn_proc: Test code for proc connector
  2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
  2023-03-09  3:19 ` [PATCH 1/5] netlink: Reverse the patch which removed filtering Anjali Kulkarni
  2023-03-09  3:19 ` [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs Anjali Kulkarni
@ 2023-03-09  3:19 ` Anjali Kulkarni
  2023-03-09  3:19 ` [PATCH 4/5] connector/cn_proc: Allow non-root users access Anjali Kulkarni
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09  3:19 UTC (permalink / raw)
  To: davem
  Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
	leon, keescook, socketcan, petrm, linux-kernel, netdev,
	anjali.k.kulkarni

Test code for proc connector.

Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
 samples/connector/proc_filter.c | 262 ++++++++++++++++++++++++++++++++
 1 file changed, 262 insertions(+)
 create mode 100644 samples/connector/proc_filter.c

diff --git a/samples/connector/proc_filter.c b/samples/connector/proc_filter.c
new file mode 100644
index 000000000000..25202f5bc126
--- /dev/null
+++ b/samples/connector/proc_filter.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <sys/types.h>
+#include <sys/epoll.h>
+#include <sys/socket.h>
+#include <linux/netlink.h>
+#include <linux/connector.h>
+#include <linux/cn_proc.h>
+
+#include <stddef.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <strings.h>
+#include <errno.h>
+#include <signal.h>
+
+#define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \
+			 sizeof(int))
+
+#define MAX_EVENTS 1
+
+#ifdef ENABLE_PRINTS
+#define Printf printf
+#else
+#define Printf
+#endif
+
+volatile static int interrupted;
+static int nl_sock, ret_errno, tcount;
+static struct epoll_event evn;
+
+int send_message(enum proc_cn_mcast_op mcast_op)
+{
+	char buff[NL_MESSAGE_SIZE];
+	struct nlmsghdr *hdr;
+	struct cn_msg *msg;
+
+	hdr = (struct nlmsghdr *)buff;
+	hdr->nlmsg_len = NL_MESSAGE_SIZE;
+	hdr->nlmsg_type = NLMSG_DONE;
+	hdr->nlmsg_flags = 0;
+	hdr->nlmsg_seq = 0;
+	hdr->nlmsg_pid = getpid();
+
+	msg = (struct cn_msg *)NLMSG_DATA(hdr);
+	msg->id.idx = CN_IDX_PROC;
+	msg->id.val = CN_VAL_PROC;
+	msg->seq = 0;
+	msg->ack = 0;
+	msg->flags = 0;
+
+	msg->len = sizeof(int);
+	*(int *)msg->data = mcast_op;
+
+	if (send(nl_sock, hdr, hdr->nlmsg_len, 0) == -1) {
+		ret_errno = errno;
+		perror("send failed");
+		return -3;
+	}
+	return 0;
+}
+
+int register_proc_netlink(int *efd, enum proc_cn_mcast_op mcast_op)
+{
+	struct sockaddr_nl sa_nl;
+	int err = 0, epoll_fd;
+
+	nl_sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
+
+	if (nl_sock == -1) {
+		ret_errno = errno;
+		perror("socket failed");
+		return -1;
+	}
+
+	bzero(&sa_nl, sizeof(sa_nl));
+	sa_nl.nl_family = AF_NETLINK;
+	sa_nl.nl_groups = CN_IDX_PROC;
+	sa_nl.nl_pid    = getpid();
+
+	if (bind(nl_sock, (struct sockaddr *)&sa_nl, sizeof(sa_nl)) == -1) {
+		ret_errno = errno;
+		perror("bind failed");
+		return -2;
+	}
+
+	epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+	if (epoll_fd < 0) {
+		ret_errno = errno;
+		perror("epoll_create1 failed");
+		return -2;
+	}
+
+	err = send_message(mcast_op);
+	if (err < 0)
+		return err;
+
+	evn.events = EPOLLIN;
+	evn.data.fd = nl_sock;
+	if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, nl_sock, &evn) < 0) {
+		ret_errno = errno;
+		perror("epoll_ctl failed");
+		return -3;
+	}
+	*efd = epoll_fd;
+	return 0;
+}
+
+static void sigint(__attribute__((__always_unused)) int sig)
+{
+	interrupted = 1;
+}
+
+int handle_packet(char *buff, int fd, struct proc_event *event)
+{
+	struct nlmsghdr *hdr;
+
+	hdr = (struct nlmsghdr *)buff;
+
+	if (hdr->nlmsg_type == NLMSG_ERROR) {
+		perror("NLMSG_ERROR error\n");
+		return -3;
+	} else if (hdr->nlmsg_type == NLMSG_DONE) {
+		event = (struct proc_event *)
+			((struct cn_msg *)NLMSG_DATA(hdr))->data;
+		tcount++;
+		switch (event->what) {
+		case PROC_EVENT_EXIT:
+			Printf("Exit process %d (tgid %d) with code %d, signal %d\n",
+			       event->event_data.exit.process_pid,
+			       event->event_data.exit.process_tgid,
+			       event->event_data.exit.exit_code,
+			       event->event_data.exit.exit_signal);
+			break;
+		case PROC_EVENT_FORK:
+			Printf("Fork process %d (tgid %d), parent %d (tgid %d)\n",
+			       event->event_data.fork.child_pid,
+			       event->event_data.fork.child_tgid,
+			       event->event_data.fork.parent_pid,
+			       event->event_data.fork.parent_tgid);
+			break;
+		case PROC_EVENT_EXEC:
+			Printf("Exec process %d (tgid %d)\n",
+			       event->event_data.exec.process_pid,
+			       event->event_data.exec.process_tgid);
+			break;
+		case PROC_EVENT_UID:
+			Printf("UID process %d (tgid %d) uid %d euid %d\n",
+			       event->event_data.id.process_pid,
+			       event->event_data.id.process_tgid,
+			       event->event_data.id.r.ruid,
+			       event->event_data.id.e.euid);
+			break;
+		case PROC_EVENT_GID:
+			Printf("GID process %d (tgid %d) gid %d egid %d\n",
+			       event->event_data.id.process_pid,
+			       event->event_data.id.process_tgid,
+			       event->event_data.id.r.rgid,
+			       event->event_data.id.e.egid);
+			break;
+		case PROC_EVENT_SID:
+			Printf("SID process %d (tgid %d)\n",
+			       event->event_data.sid.process_pid,
+			       event->event_data.sid.process_tgid);
+			break;
+		case PROC_EVENT_PTRACE:
+			Printf("Ptrace process %d (tgid %d), Tracer %d (tgid %d)\n",
+			       event->event_data.ptrace.process_pid,
+			       event->event_data.ptrace.process_tgid,
+			       event->event_data.ptrace.tracer_pid,
+			       event->event_data.ptrace.tracer_tgid);
+			break;
+		case PROC_EVENT_COMM:
+			Printf("Comm process %d (tgid %d) comm %s\n",
+			       event->event_data.comm.process_pid,
+			       event->event_data.comm.process_tgid,
+			       event->event_data.comm.comm);
+			break;
+		case PROC_EVENT_COREDUMP:
+			Printf("Coredump process %d (tgid %d) parent %d, (tgid %d)\n",
+			       event->event_data.coredump.process_pid,
+			       event->event_data.coredump.process_tgid,
+			       event->event_data.coredump.parent_pid,
+			       event->event_data.coredump.parent_tgid);
+			break;
+		default:
+			break;
+		}
+	}
+	return 0;
+}
+
+int handle_events(int epoll_fd, struct proc_event *pev)
+{
+	char buff[CONNECTOR_MAX_MSG_SIZE];
+	struct epoll_event ev[MAX_EVENTS];
+	int i, event_count = 0, err = 0;
+
+	event_count = epoll_wait(epoll_fd, ev, MAX_EVENTS, -1);
+	if (event_count < 0) {
+		ret_errno = errno;
+		if (ret_errno != EINTR)
+			perror("epoll_wait failed");
+		return -3;
+	}
+	for (i = 0; i < event_count; i++) {
+		if (!(ev[i].events & EPOLLIN))
+			continue;
+		if (recv(ev[i].data.fd, buff, sizeof(buff), 0) == -1) {
+			ret_errno = errno;
+			perror("recv failed");
+			return -3;
+		}
+		err = handle_packet(buff, ev[i].data.fd, pev);
+		if (err < 0)
+			return err;
+	}
+	return 0;
+}
+
+int main(int argc, char *argv[])
+{
+	int epoll_fd, err;
+	struct proc_event proc_ev;
+
+	signal(SIGINT, sigint);
+
+	err = register_proc_netlink(&epoll_fd, PROC_CN_MCAST_LISTEN);
+	if (err < 0) {
+		if (err == -2)
+			close(nl_sock);
+		if (err == -3) {
+			close(nl_sock);
+			close(epoll_fd);
+		}
+		exit(1);
+	}
+
+	while (!interrupted) {
+		err = handle_events(epoll_fd, &proc_ev);
+		if (err < 0) {
+			if (ret_errno == EINTR)
+				continue;
+			if (err == -2)
+				close(nl_sock);
+			if (err == -3) {
+				close(nl_sock);
+				close(epoll_fd);
+			}
+			exit(1);
+		}
+	}
+
+	send_message(PROC_CN_MCAST_IGNORE);
+
+	close(epoll_fd);
+	close(nl_sock);
+
+	printf("Done total count: %d\n", tcount);
+	exit(0);
+}
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 4/5] connector/cn_proc: Allow non-root users access
  2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
                   ` (2 preceding siblings ...)
  2023-03-09  3:19 ` [PATCH 3/5] connector/cn_proc: Test code for proc connector Anjali Kulkarni
@ 2023-03-09  3:19 ` Anjali Kulkarni
  2023-03-09 17:09   ` Christian Brauner
  2023-03-09  3:19 ` [PATCH 5/5] connector/cn_proc: Performance improvements Anjali Kulkarni
  2023-03-09 17:05 ` [PATCH 0/5] Process connector bug fixes & enhancements Christian Brauner
  5 siblings, 1 reply; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09  3:19 UTC (permalink / raw)
  To: davem
  Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
	leon, keescook, socketcan, petrm, linux-kernel, netdev,
	anjali.k.kulkarni

The patch allows non-root users to receive proc connector
notifications, since anyone can normally get process start/exit status
from /proc. The reason for not allowing non-root users to receive
multicast messages is long gone, as described in this thread:
https://linux-kernel.vger.kernel.narkive.com/CpJFcnra/multicast-netlink-for-non-root-process

Also, many other netlink protocols allow non-root users to receive multicast
messages, and there is no reason to discriminate against CONNECTOR.

The reason we need this change is that we need to run our DB application
as a non-root user.

Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
 drivers/connector/cn_proc.c   | 7 -------
 drivers/connector/connector.c | 1 +
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index ef3820b43b5c..03ba70f07113 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -376,12 +376,6 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 	    !task_is_in_init_pid_ns(current))
 		return;
 
-	/* Can only change if privileged. */
-	if (!__netlink_ns_capable(nsp, &init_user_ns, CAP_NET_ADMIN)) {
-		err = EPERM;
-		goto out;
-	}
-
 	if (msg->len == sizeof(mc_op))
 		mc_op = *((enum proc_cn_mcast_op *)msg->data);
 	else
@@ -414,7 +408,6 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 		break;
 	}
 
-out:
 	cn_proc_ack(err, msg->seq, msg->ack);
 }
 
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index 1b7851b1aa0f..136a9f38a063 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -251,6 +251,7 @@ static int cn_init(void)
 {
 	struct cn_dev *dev = &cdev;
 	struct netlink_kernel_cfg cfg = {
+		.flags = NL_CFG_F_NONROOT_RECV,
 		.groups	= CN_NETLINK_USERS + 0xf,
 		.input	= cn_rx_skb,
 	};
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 5/5] connector/cn_proc: Performance improvements
  2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
                   ` (3 preceding siblings ...)
  2023-03-09  3:19 ` [PATCH 4/5] connector/cn_proc: Allow non-root users access Anjali Kulkarni
@ 2023-03-09  3:19 ` Anjali Kulkarni
  2023-03-09 17:05 ` [PATCH 0/5] Process connector bug fixes & enhancements Christian Brauner
  5 siblings, 0 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09  3:19 UTC (permalink / raw)
  To: davem
  Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
	leon, keescook, socketcan, petrm, linux-kernel, netdev,
	anjali.k.kulkarni

This patch adds the capability to filter messages sent by the proc
connector based on the event type supplied in the message from the
client to the connector. The client can register to listen for an event
type given in struct proc_input.

The event type supplied by client is stored in the client socket's
sk_user_data field, along with the multicast LISTEN or IGNORE message.

The cn_filter function checks whether the event type being notified via
the proc connector matches the event type requested by the client, and
sends the packet on a match or drops it otherwise.

The patch ensures that existing clients, which use the old mechanism of
not sending an event type, continue to work without any changes.

We also add a new event, PROC_EVENT_NONZERO_EXIT, which is sent by the
kernel to a listening application only when a process exits with a
non-zero exit status.

The proc_filter.c test file is updated to reflect the new filtering.

Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
 drivers/connector/cn_proc.c     | 56 ++++++++++++++++++++++++++++++---
 include/uapi/linux/cn_proc.h    | 19 +++++++++++
 samples/connector/proc_filter.c | 47 ++++++++++++++++++++++++---
 3 files changed, 113 insertions(+), 9 deletions(-)

diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index 03ba70f07113..d5b2e6f26385 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -50,22 +50,45 @@ static DEFINE_PER_CPU(struct local_event, local_event) = {
 
 int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data)
 {
+	uintptr_t val;
+	__u32 what, exit_code, *ptr;
 	enum proc_cn_mcast_op mc_op;
 
-	if (!dsk)
+	if (!dsk || !data)
 		return 0;
 
+	ptr = (__u32 *)data;
+	what = *ptr++;
+	exit_code = *ptr;
+	val = ((struct proc_input *)(dsk->sk_user_data))->event_type;
 	mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op;
 
 	if (mc_op == PROC_CN_MCAST_IGNORE)
 		return 1;
 
-	return 0;
+	if ((__u32)val == PROC_EVENT_ALL)
+		return 0;
+	/*
+	 * Drop packet if we have to report only non-zero exit status
+	 * (PROC_EVENT_NONZERO_EXIT) and exit status is 0
+	 */
+	if (((__u32)val & PROC_EVENT_NONZERO_EXIT) &&
+	    (what == PROC_EVENT_EXIT)) {
+		if (exit_code)
+			return 0;
+		else
+			return 1;
+	}
+	if ((__u32)val & what)
+		return 0;
+	return 1;
 }
 EXPORT_SYMBOL_GPL(cn_filter);
 
 static inline void send_msg(struct cn_msg *msg)
 {
+	__u32 filter_data[2];
+
 	local_lock(&local_event.lock);
 
 	msg->seq = __this_cpu_inc_return(local_event.count) - 1;
@@ -77,8 +100,15 @@ static inline void send_msg(struct cn_msg *msg)
 	 *
 	 * If cn_netlink_send() fails, the data is not sent.
 	 */
+	filter_data[0] = ((struct proc_event *)msg->data)->what;
+	if (filter_data[0] == PROC_EVENT_EXIT) {
+		filter_data[1] =
+		((struct proc_event *)msg->data)->event_data.exit.exit_code;
+	} else {
+		filter_data[1] = 0;
+	}
 	cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
-			     cn_filter, NULL);
+			     cn_filter, (void *)filter_data);
 
 	local_unlock(&local_event.lock);
 }
@@ -364,6 +394,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 			      struct netlink_skb_parms *nsp)
 {
 	enum proc_cn_mcast_op mc_op = 0, prev_mc_op = 0;
+	struct proc_input *pinput = NULL;
+	enum proc_cn_event ev_type = 0;
 	int err = 0, initial = 0;
 	struct sock *sk = NULL;
 
@@ -376,11 +408,21 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 	    !task_is_in_init_pid_ns(current))
 		return;
 
-	if (msg->len == sizeof(mc_op))
+	if (msg->len == sizeof(*pinput)) {
+		pinput = (struct proc_input *)msg->data;
+		mc_op = pinput->mcast_op;
+		ev_type = pinput->event_type;
+	} else if (msg->len == sizeof(mc_op)) {
 		mc_op = *((enum proc_cn_mcast_op *)msg->data);
-	else
+		ev_type = PROC_EVENT_ALL;
+	} else
 		return;
 
+	ev_type = valid_event((enum proc_cn_event)ev_type);
+
+	if (ev_type == PROC_EVENT_NONE)
+		ev_type = PROC_EVENT_ALL;
+
 	if (nsp->sk) {
 		sk = nsp->sk;
 		if (sk->sk_user_data == NULL) {
@@ -391,6 +433,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 			prev_mc_op =
 			((struct proc_input *)(sk->sk_user_data))->mcast_op;
 		}
+		((struct proc_input *)(sk->sk_user_data))->event_type =
+			ev_type;
 		((struct proc_input *)(sk->sk_user_data))->mcast_op = mc_op;
 	}
 
@@ -402,6 +446,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
 	case PROC_CN_MCAST_IGNORE:
 		if (!initial && (prev_mc_op != PROC_CN_MCAST_IGNORE))
 			atomic_dec(&proc_event_num_listeners);
+		((struct proc_input *)(sk->sk_user_data))->event_type =
+			PROC_EVENT_NONE;
 		break;
 	default:
 		err = EINVAL;
diff --git a/include/uapi/linux/cn_proc.h b/include/uapi/linux/cn_proc.h
index 6a06fb424313..f2afb7cc4926 100644
--- a/include/uapi/linux/cn_proc.h
+++ b/include/uapi/linux/cn_proc.h
@@ -30,6 +30,15 @@ enum proc_cn_mcast_op {
 	PROC_CN_MCAST_IGNORE = 2
 };
 
+#define PROC_EVENT_ALL (PROC_EVENT_FORK | PROC_EVENT_EXEC | PROC_EVENT_UID |  \
+			PROC_EVENT_GID | PROC_EVENT_SID | PROC_EVENT_PTRACE | \
+			PROC_EVENT_COMM | PROC_EVENT_NONZERO_EXIT |           \
+			PROC_EVENT_COREDUMP | PROC_EVENT_EXIT)
+
+/*
+ * If you add an entry in proc_cn_event, make sure you add it in
+ * PROC_EVENT_ALL above as well.
+ */
 enum proc_cn_event {
 	/* Use successive bits so the enums can be used to record
 	 * sets of events as well
@@ -45,15 +54,25 @@ enum proc_cn_event {
 	/* "next" should be 0x00000400 */
 	/* "last" is the last process event: exit,
 	 * while "next to last" is coredumping event
+	 * before that is report only if process dies
+	 * with non-zero exit status
 	 */
+	PROC_EVENT_NONZERO_EXIT = 0x20000000,
 	PROC_EVENT_COREDUMP = 0x40000000,
 	PROC_EVENT_EXIT = 0x80000000
 };
 
 struct proc_input {
 	enum proc_cn_mcast_op mcast_op;
+	enum proc_cn_event event_type;
 };
 
+static inline enum proc_cn_event valid_event(enum proc_cn_event ev_type)
+{
+	ev_type &= PROC_EVENT_ALL;
+	return ev_type;
+}
+
 /*
  * From the user's point of view, the process
  * ID is the thread group ID and thread ID is the internal
diff --git a/samples/connector/proc_filter.c b/samples/connector/proc_filter.c
index 25202f5bc126..63504fc5f002 100644
--- a/samples/connector/proc_filter.c
+++ b/samples/connector/proc_filter.c
@@ -15,22 +15,33 @@
 #include <errno.h>
 #include <signal.h>
 
+#define FILTER
+
+#ifdef FILTER
+#define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \
+			 sizeof(struct proc_input))
+#else
 #define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \
 			 sizeof(int))
+#endif
 
 #define MAX_EVENTS 1
 
+volatile static int interrupted;
+static int nl_sock, ret_errno, tcount;
+static struct epoll_event evn;
+
 #ifdef ENABLE_PRINTS
 #define Printf printf
 #else
 #define Printf
 #endif
 
-volatile static int interrupted;
-static int nl_sock, ret_errno, tcount;
-static struct epoll_event evn;
-
+#ifdef FILTER
+int send_message(struct proc_input *pinp)
+#else
 int send_message(enum proc_cn_mcast_op mcast_op)
+#endif
 {
 	char buff[NL_MESSAGE_SIZE];
 	struct nlmsghdr *hdr;
@@ -50,8 +61,14 @@ int send_message(enum proc_cn_mcast_op mcast_op)
 	msg->ack = 0;
 	msg->flags = 0;
 
+#ifdef FILTER
+	msg->len = sizeof(struct proc_input);
+	((struct proc_input *)msg->data)->mcast_op = pinp->mcast_op;
+	((struct proc_input *)msg->data)->event_type = pinp->event_type;
+#else
 	msg->len = sizeof(int);
 	*(int *)msg->data = mcast_op;
+#endif
 
 	if (send(nl_sock, hdr, hdr->nlmsg_len, 0) == -1) {
 		ret_errno = errno;
@@ -61,7 +78,11 @@ int send_message(enum proc_cn_mcast_op mcast_op)
 	return 0;
 }
 
+#ifdef FILTER
+int register_proc_netlink(int *efd, struct proc_input *input)
+#else
 int register_proc_netlink(int *efd, enum proc_cn_mcast_op mcast_op)
+#endif
 {
 	struct sockaddr_nl sa_nl;
 	int err = 0, epoll_fd;
@@ -92,7 +113,11 @@ int register_proc_netlink(int *efd, enum proc_cn_mcast_op mcast_op)
 		return -2;
 	}
 
+#ifdef FILTER
+	err = send_message(input);
+#else
 	err = send_message(mcast_op);
+#endif
 	if (err < 0)
 		return err;
 
@@ -223,10 +248,19 @@ int main(int argc, char *argv[])
 {
 	int epoll_fd, err;
 	struct proc_event proc_ev;
+#ifdef FILTER
+	struct proc_input input;
+#endif
 
 	signal(SIGINT, sigint);
 
+#ifdef FILTER
+	input.event_type = PROC_EVENT_NONZERO_EXIT;
+	input.mcast_op = PROC_CN_MCAST_LISTEN;
+	err = register_proc_netlink(&epoll_fd, &input);
+#else
 	err = register_proc_netlink(&epoll_fd, PROC_CN_MCAST_LISTEN);
+#endif
 	if (err < 0) {
 		if (err == -2)
 			close(nl_sock);
@@ -252,7 +286,12 @@ int main(int argc, char *argv[])
 		}
 	}
 
+#ifdef FILTER
+	input.mcast_op = PROC_CN_MCAST_IGNORE;
+	send_message(&input);
+#else
 	send_message(PROC_CN_MCAST_IGNORE);
+#endif
 
 	close(epoll_fd);
 	close(nl_sock);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs
  2023-03-09  3:19 ` [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs Anjali Kulkarni
@ 2023-03-09  7:15   ` kernel test robot
  0 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-03-09  7:15 UTC (permalink / raw)
  To: Anjali Kulkarni, davem
  Cc: oe-kbuild-all, edumazet, kuba, pabeni, zbr, brauner, johannes,
	ecree.xilinx, leon, keescook, socketcan, petrm, linux-kernel,
	netdev, anjali.k.kulkarni

Hi Anjali,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]
[also build test WARNING on net/master vfs-idmapping/for-next linus/master v6.3-rc1 next-20230309]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Anjali-Kulkarni/netlink-Reverse-the-patch-which-removed-filtering/20230309-112151
patch link:    https://lore.kernel.org/r/20230309031953.2350213-3-anjali.k.kulkarni%40oracle.com
patch subject: [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20230309/202303091536.tx7dCAzn-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/b27e8d125480d07c95a71e8ef2f0b38d890138cd
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Anjali-Kulkarni/netlink-Reverse-the-patch-which-removed-filtering/20230309-112151
        git checkout b27e8d125480d07c95a71e8ef2f0b38d890138cd
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash drivers/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202303091536.tx7dCAzn-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/connector/cn_proc.c:51:5: warning: no previous prototype for 'cn_filter' [-Wmissing-prototypes]
      51 | int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data)
         |     ^~~~~~~~~


vim +/cn_filter +51 drivers/connector/cn_proc.c

    50	
  > 51	int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data)
    52	{
    53		enum proc_cn_mcast_op mc_op;
    54	
    55		if (!dsk)
    56			return 0;
    57	
    58		mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op;
    59	
    60		if (mc_op == PROC_CN_MCAST_IGNORE)
    61			return 1;
    62	
    63		return 0;
    64	}
    65	EXPORT_SYMBOL_GPL(cn_filter);
    66	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] Process connector bug fixes & enhancements
  2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
                   ` (4 preceding siblings ...)
  2023-03-09  3:19 ` [PATCH 5/5] connector/cn_proc: Performance improvements Anjali Kulkarni
@ 2023-03-09 17:05 ` Christian Brauner
  2023-03-09 21:04   ` Anjali Kulkarni
  5 siblings, 1 reply; 12+ messages in thread
From: Christian Brauner @ 2023-03-09 17:05 UTC (permalink / raw)
  To: Anjali Kulkarni
  Cc: davem, edumazet, kuba, pabeni, zbr, johannes, ecree.xilinx, leon,
	keescook, socketcan, petrm, linux-kernel, netdev

On Wed, Mar 08, 2023 at 07:19:48PM -0800, Anjali Kulkarni wrote:
> From: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
> 
> In this series, we add back filtering to the proc connector module. This
> is required to fix some bugs and also will enable the addition of event 
> based filtering, which will improve performance for anyone interested
> in a subset of process events, as compared to the current approach,
> which is to send all event notifications.
> 
> Thus, a client can register to listen for only exit or fork or a mix or
> all of the events. This greatly enhances performance - currently, we
> need to listen to all events, and there are 9 different types of events.
> For eg. handling 3 types of events - 8K-forks + 8K-exits + 8K-execs takes
> 200ms, whereas handling 2 types - 8K-forks + 8K-exits takes about 150ms,
> and handling just one type - 8K exits takes about 70ms.
> 
> Reason why we need the above changes and also a new event type 
> PROC_EVENT_NONZERO_EXIT, which is only sent by kernel to a listening
> application when any process exiting has a non-zero exit status is:
> 
> Oracle DB runs on a large scale with 100000s of short lived processes,
> starting up and exiting quickly. A process monitoring DB daemon which
> tracks and cleans up after processes that have died without a proper exit
> needs notifications only when a process died with a non-zero exit code
> (which should be rare).
> 
> This change will give Oracle DB substantial performance savings - it takes
> 50ms to scan about 8K PIDs in /proc, about 500ms for 100K PIDs. DB does
> this check every 3 secs, so over an hour we save 10secs for 100K PIDs.
> 
> Measuring the time using pidfds for monitoring 8K process exits took 4
> times longer - 200ms, as compared to 70ms using only exit notifications
> of proc connector. Hence, we cannot use pidfd for our use case.

Just out of curiosity, what's the reason this took so much longer?

> 
> This kind of a new event could also be useful to other applications like
> Google's lmkd daemon, which needs a killed process's exit notification.

Fwiw - independent of this thing here - I think we might need to also
think about making the exit status of a process readable from a pidfd.
Even after the process has been exited + reaped... I have a _rough_ idea
how I thought this could work:

* introduce struct pidfd_info
* allocate one struct pidfd_info per struct pid _lazily_ when the first pidfd is created
* stash struct pidfd_info in pidfd_file->private_data
* add .exit_status field to struct pidfd_info
* when the process exits, stash the exit status in struct pidfd_info
* add either a new system call or an ioctl() on the pidfd which returns EAGAIN
  or similar until the process has exited and then becomes readable

Thought needs to be put into finding struct pidfd_info based on struct pid...

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] connector/cn_proc: Allow non-root users access
  2023-03-09  3:19 ` [PATCH 4/5] connector/cn_proc: Allow non-root users access Anjali Kulkarni
@ 2023-03-09 17:09   ` Christian Brauner
  2023-03-09 21:57     ` Anjali Kulkarni
  2023-03-10 20:47     ` Anjali Kulkarni
  0 siblings, 2 replies; 12+ messages in thread
From: Christian Brauner @ 2023-03-09 17:09 UTC (permalink / raw)
  To: Anjali Kulkarni
  Cc: davem, edumazet, kuba, pabeni, zbr, johannes, ecree.xilinx, leon,
	keescook, socketcan, petrm, linux-kernel, netdev

On Wed, Mar 08, 2023 at 07:19:52PM -0800, Anjali Kulkarni wrote:
> The patch allows non-root users to receive cn proc connector
> notifications, as anyone can normally get process start/exit status from
> /proc. The reason for not allowing non-root users to receive multicast
> messages is long gone, as described in this thread:
> https://linux-kernel.vger.kernel.narkive.com/CpJFcnra/multicast-netlink-for-non-root-process

Sorry that thread is kinda convoluted. Could you please provide a
summary in the commit message and explain why this isn't an issue
anymore?

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] Process connector bug fixes & enhancements
  2023-03-09 17:05 ` [PATCH 0/5] Process connector bug fixes & enhancements Christian Brauner
@ 2023-03-09 21:04   ` Anjali Kulkarni
  0 siblings, 0 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09 21:04 UTC (permalink / raw)
  To: Christian Brauner
  Cc: davem, edumazet, kuba, pabeni, zbr, johannes, ecree.xilinx, leon,
	keescook, socketcan, petrm, linux-kernel, netdev,
	Anjali Kulkarni



________________________________________
From: Christian Brauner <brauner@kernel.org>
Sent: Thursday, March 9, 2023 9:05 AM
To: Anjali Kulkarni
Cc: davem@davemloft.net; edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; zbr@ioremap.net; johannes@sipsolutions.net; ecree.xilinx@gmail.com; leon@kernel.org; keescook@chromium.org; socketcan@hartkopp.net; petrm@nvidia.com; linux-kernel@vger.kernel.org; netdev@vger.kernel.org
Subject: Re: [PATCH 0/5] Process connector bug fixes & enhancements

On Wed, Mar 08, 2023 at 07:19:48PM -0800, Anjali Kulkarni wrote:
> From: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
>
> In this series, we add back filtering to the proc connector module. This
> is required to fix some bugs and also will enable the addition of event
> based filtering, which will improve performance for anyone interested
> in a subset of process events, as compared to the current approach,
> which is to send all event notifications.
>
> Thus, a client can register to listen for only exit or fork or a mix or
> all of the events. This greatly enhances performance - currently, we
> need to listen to all events, and there are 9 different types of events.
> For eg. handling 3 types of events - 8K-forks + 8K-exits + 8K-execs takes
> 200ms, whereas handling 2 types - 8K-forks + 8K-exits takes about 150ms,
> and handling just one type - 8K exits takes about 70ms.
>
> Reason why we need the above changes and also a new event type
> PROC_EVENT_NONZERO_EXIT, which is only sent by kernel to a listening
> application when any process exiting has a non-zero exit status is:
>
> Oracle DB runs on a large scale with 100000s of short lived processes,
> starting up and exiting quickly. A process monitoring DB daemon which
> tracks and cleans up after processes that have died without a proper exit
> needs notifications only when a process died with a non-zero exit code
> (which should be rare).
>
> This change will give Oracle DB substantial performance savings - it takes
> 50ms to scan about 8K PIDs in /proc, about 500ms for 100K PIDs. DB does
> this check every 3 secs, so over an hour we save 10secs for 100K PIDs.
>
> Measuring the time using pidfds for monitoring 8K process exits took 4
> times longer - 200ms, as compared to 70ms using only exit notifications
> of proc connector. Hence, we cannot use pidfd for our use case.

Just out of curiosity, what's the reason this took so much longer?

ANJALI> I have not looked into it in detail, but it seems this may be due to the number of system calls involved. The monitored process needs to send its pidfd to the monitoring process, which adds the pidfd to an epoll interface and removes it on process exit. (The time the monitored process takes to open and send the pidfd is not included in this figure.) In our case, we cannot have our monitoring process learn about every exit (or receive every new process's fd), given the large number of exits happening.

>
> This kind of a new event could also be useful to other applications like
> Google's lmkd daemon, which needs a killed process's exit notification.

Fwiw - independent of this thing here - I think we might need to also
think about making the exit status of a process readable from a pidfd.
Even after the process has been exited + reaped... I have a _rough_ idea
how I thought this could work:

* introduce struct pidfd_info
* allocate one struct pidfd_info per struct pid _lazily_ when the first pidfd is created
* stash struct pidfd_info in pidfd_file->private_data
* add .exit_status field to struct pidfd_info
* when the process exits, stash the exit status in struct pidfd_info
* add either a new system call or an ioctl() on the pidfd which returns EAGAIN
  or similar until the process has exited and then becomes readable

Thought needs to be put into finding struct pidfd_info based on struct pid...

ANJALI> This seems like a useful feature to have.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] connector/cn_proc: Allow non-root users access
  2023-03-09 17:09   ` Christian Brauner
@ 2023-03-09 21:57     ` Anjali Kulkarni
  2023-03-10 20:47     ` Anjali Kulkarni
  1 sibling, 0 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-09 21:57 UTC (permalink / raw)
  To: Christian Brauner
  Cc: davem, edumazet, kuba, pabeni, zbr, johannes, ecree.xilinx, leon,
	keescook, socketcan, petrm, linux-kernel, netdev,
	Anjali Kulkarni



________________________________________
From: Christian Brauner <brauner@kernel.org>
Sent: Thursday, March 9, 2023 9:09 AM
To: Anjali Kulkarni
Cc: davem@davemloft.net; edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; zbr@ioremap.net; johannes@sipsolutions.net; ecree.xilinx@gmail.com; leon@kernel.org; keescook@chromium.org; socketcan@hartkopp.net; petrm@nvidia.com; linux-kernel@vger.kernel.org; netdev@vger.kernel.org
Subject: Re: [PATCH 4/5] connector/cn_proc: Allow non-root users access

On Wed, Mar 08, 2023 at 07:19:52PM -0800, Anjali Kulkarni wrote:
> The patch allows non-root users to receive cn proc connector
> notifications, as anyone can normally get process start/exit status from
> /proc. The reason for not allowing non-root users to receive multicast
> messages is long gone, as described in this thread:
> https://linux-kernel.vger.kernel.narkive.com/CpJFcnra/multicast-netlink-for-non-root-process

Sorry that thread is kinda convoluted. Could you please provide a
summary in the commit message and explain why this isn't an issue
anymore?

ANJALI> Will change the commit message as follows:
There were a couple of reasons for not allowing non-root users access initially. One, according to Andi Kleen and Alexey, is that "at some point there was no proper receive buffer management in place for netlink multicast. But that should be long fixed." The second is that some of the messages may contain data that is root only. But this should be handled with finer granularity, which is being done at the protocol layer. The only problematic protocols are nf_queue and the firewall netlink, according to Andi. Hence, this restriction on non-root access was relaxed for rtnetlink initially (and subsequently for other protocols as well):
https://lore.kernel.org/all/20020612013101.A22399@wotan.suse.de/
Since process connector messages are not sensitive (process fork, exit notifications etc.), and anyone can read /proc data, we can allow non-root access here too. The reason we need this change is that we cannot run our DB application as root.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] connector/cn_proc: Allow non-root users access
  2023-03-09 17:09   ` Christian Brauner
  2023-03-09 21:57     ` Anjali Kulkarni
@ 2023-03-10 20:47     ` Anjali Kulkarni
  1 sibling, 0 replies; 12+ messages in thread
From: Anjali Kulkarni @ 2023-03-10 20:47 UTC (permalink / raw)
  To: Christian Brauner
  Cc: davem, edumazet, kuba, pabeni, zbr, johannes, ecree.xilinx, leon,
	keescook, socketcan, petrm, linux-kernel, netdev,
	Anjali Kulkarni



________________________________________
From: Christian Brauner <brauner@kernel.org>
Sent: Thursday, March 9, 2023 9:09 AM
To: Anjali Kulkarni
Cc: davem@davemloft.net; edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; zbr@ioremap.net; johannes@sipsolutions.net; ecree.xilinx@gmail.com; leon@kernel.org; keescook@chromium.org; socketcan@hartkopp.net; petrm@nvidia.com; linux-kernel@vger.kernel.org; netdev@vger.kernel.org
Subject: Re: [PATCH 4/5] connector/cn_proc: Allow non-root users access

On Wed, Mar 08, 2023 at 07:19:52PM -0800, Anjali Kulkarni wrote:
> The patch allows non-root users to receive cn proc connector
> notifications, as anyone can normally get process start/exit status from
> /proc. The reason for not allowing non-root users to receive multicast
> messages is long gone, as described in this thread:
> https://linux-kernel.vger.kernel.narkive.com/CpJFcnra/multicast-netlink-for-non-root-process

Sorry that thread is kinda convoluted. Could you please provide a
summary in the commit message and explain why this isn't an issue
anymore?

ANJALI> Looking into this some more, I think that instead of allowing non-root access for all NETLINK_CONNECTOR users by setting the NL_CFG_F_NONROOT_RECV flag, we could make this change at an even finer-grained level than the protocol level. So I will add a check to enable non-root access only for the event notification (cn_proc) user of NETLINK_CONNECTOR, based on the multicast group. Since CONNECTOR is very generic and could be used for varied purposes, a more fine-grained approach may be required here. I will send the next patch series with this change.


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-03-10 20:48 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-09  3:19 [PATCH 0/5] Process connector bug fixes & enhancements Anjali Kulkarni
2023-03-09  3:19 ` [PATCH 1/5] netlink: Reverse the patch which removed filtering Anjali Kulkarni
2023-03-09  3:19 ` [PATCH 2/5] connector/cn_proc: Add filtering to fix some bugs Anjali Kulkarni
2023-03-09  7:15   ` kernel test robot
2023-03-09  3:19 ` [PATCH 3/5] connector/cn_proc: Test code for proc connector Anjali Kulkarni
2023-03-09  3:19 ` [PATCH 4/5] connector/cn_proc: Allow non-root users access Anjali Kulkarni
2023-03-09 17:09   ` Christian Brauner
2023-03-09 21:57     ` Anjali Kulkarni
2023-03-10 20:47     ` Anjali Kulkarni
2023-03-09  3:19 ` [PATCH 5/5] connector/cn_proc: Performance improvements Anjali Kulkarni
2023-03-09 17:05 ` [PATCH 0/5] Process connector bug fixes & enhancements Christian Brauner
2023-03-09 21:04   ` Anjali Kulkarni
