* [PATCH RFC net-next v2 0/3] vsock: add support for sockmap
@ 2023-01-31  4:35 Bobby Eshleman
  2023-01-31  4:35 ` [PATCH RFC net-next v2 1/3] vsock: support sockmap Bobby Eshleman
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Bobby Eshleman @ 2023-01-31  4:35 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrii Nakryiko, Mykola Lysenko, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan
  Cc: Bobby Eshleman, Bobby Eshleman, linux-kernel, kvm,
	virtualization, netdev, bpf, linux-kselftest, jakub, hdanton,
	cong.wang

Add support for sockmap to vsock.

We're testing usage of vsock as a way to redirect guest-local UDS requests to
the host and this patch series greatly improves the performance of such a
setup.

Compared to copying packets via userspace, this improves throughput by 121% in
basic testing.

Tested as follows.

Setup: guest unix dgram sender -> guest vsock redirector -> host vsock server
Threads: 1
Payload: 64k
No sockmap:
- 76.3 MB/s
- The guest vsock redirector was
  "socat VSOCK-CONNECT:2:1234 UNIX-RECV:/path/to/sock"
Using sockmap (this patch):
- 168.8 MB/s (+121%)
- The guest redirector was a simple sockmap echo server,
  redirecting unix ingress to vsock 2:1234 egress.
- Same sender and server programs

*Note: these numbers are from RFC v1
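
For reference, the "simple sockmap echo server" in the guest boils down to a
sockmap holding the two sockets plus an sk_skb verdict program that redirects
everything to the vsock side. The sketch below is only illustrative (it is not
the exact program used for the numbers above); the map name, slot index, and
section name are assumptions:

	/* Minimal sk_skb verdict sketch: redirect traffic arriving on the unix
	 * socket to the socket stored in slot 1 of the sockmap, which userspace
	 * has populated with the vsock connection to 2:1234.
	 */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	struct {
		__uint(type, BPF_MAP_TYPE_SOCKMAP);
		__uint(max_entries, 2);
		__type(key, __u32);
		__type(value, __u64);
	} redir_map SEC(".maps");

	SEC("sk_skb/verdict")
	int redir_to_vsock(struct __sk_buff *skb)
	{
		__u32 key = 1;	/* slot holding the vsock socket */

		/* flags == 0: redirect to the egress path of the target socket */
		return bpf_sk_redirect_map(skb, &redir_map, key, 0);
	}

	char _license[] SEC("license") = "GPL";

Userspace attaches the program to the map with BPF_SK_SKB_VERDICT and inserts
both the unix and the vsock file descriptors into the map, much like the
selftest in patch 3/3 does.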

Only the virtio transport has been tested. The loopback transport was used in
writing bpf/selftests, but not thoroughly tested otherwise.

This series requires the skb patch.

Changes in v2:
- vsock/bpf: rename vsock_dgram_* -> vsock_*
- vsock/bpf: change sk_psock_{get,put} and {lock,release}_sock() order to
	     minimize slock hold time
- vsock/bpf: use "new style" wait
- vsock/bpf: fix bug in wait log
- vsock/bpf: add check that recvmsg sk_type is one of dgram, seqpacket, or stream.
	     Return error if not one of the three.
- virtio/vsock: comment __skb_recv_datagram() usage
- virtio/vsock: do not init copied in read_skb()
- vsock/bpf: add ifdef guard around struct proto in dgram_recvmsg()
- selftests/bpf: add vsock loopback config for aarch64
- selftests/bpf: add vsock loopback config for s390x
- selftests/bpf: remove vsock device from vmtest.sh qemu machine
- selftests/bpf: remove CONFIG_VIRTIO_VSOCKETS=y from config.x86_64
- vsock/bpf: move transport-related (e.g., if (!vsk->transport)) checks out of
	     fast path

Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
---
Bobby Eshleman (3):
      vsock: support sockmap
      selftests/bpf: add vsock to vmtest.sh
      selftests/bpf: Add a test case for vsock sockmap

 drivers/vhost/vsock.c                              |   1 +
 include/linux/virtio_vsock.h                       |   1 +
 include/net/af_vsock.h                             |  17 ++
 net/vmw_vsock/Makefile                             |   1 +
 net/vmw_vsock/af_vsock.c                           |  55 ++++++-
 net/vmw_vsock/virtio_transport.c                   |   2 +
 net/vmw_vsock/virtio_transport_common.c            |  24 +++
 net/vmw_vsock/vsock_bpf.c                          | 175 +++++++++++++++++++++
 net/vmw_vsock/vsock_loopback.c                     |   2 +
 tools/testing/selftests/bpf/config.aarch64         |   2 +
 tools/testing/selftests/bpf/config.s390x           |   3 +
 tools/testing/selftests/bpf/config.x86_64          |   3 +
 .../selftests/bpf/prog_tests/sockmap_listen.c      | 163 +++++++++++++++++++
 13 files changed, 443 insertions(+), 6 deletions(-)
---
base-commit: d83115ce337a632f996e44c9f9e18cadfcf5a094
change-id: 20230118-support-vsock-sockmap-connectible-2e1297d2111a

Best regards,
-- 
Bobby Eshleman <bobby.eshleman@bytedance.com>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH RFC net-next v2 1/3] vsock: support sockmap
  2023-01-31  4:35 [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Bobby Eshleman
@ 2023-01-31  4:35 ` Bobby Eshleman
  2023-02-16 10:11     ` Stefano Garzarella
  2023-01-31  4:35 ` [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh Bobby Eshleman
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Bobby Eshleman @ 2023-01-31  4:35 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrii Nakryiko, Mykola Lysenko, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan
  Cc: Bobby Eshleman, Bobby Eshleman, linux-kernel, kvm,
	virtualization, netdev, bpf, linux-kselftest, jakub, hdanton,
	cong.wang

This patch adds sockmap support for vsock sockets. It is intended to be
usable by all transports, but only the virtio transport is implemented.

Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
---
 drivers/vhost/vsock.c                   |   1 +
 include/linux/virtio_vsock.h            |   1 +
 include/net/af_vsock.h                  |  17 ++++
 net/vmw_vsock/Makefile                  |   1 +
 net/vmw_vsock/af_vsock.c                |  55 ++++++++--
 net/vmw_vsock/virtio_transport.c        |   2 +
 net/vmw_vsock/virtio_transport_common.c |  24 +++++
 net/vmw_vsock/vsock_bpf.c               | 175 ++++++++++++++++++++++++++++++++
 net/vmw_vsock/vsock_loopback.c          |   2 +
 9 files changed, 272 insertions(+), 6 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 1f3b89c885cc..3c6dc036b904 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -439,6 +439,7 @@ static struct virtio_transport vhost_transport = {
 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
 
+		.read_skb = virtio_transport_read_skb,
 	},
 
 	.send_pkt = vhost_transport_send_pkt,
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 3f9c16611306..c58453699ee9 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -245,4 +245,5 @@ u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 wanted);
 void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
 void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
 int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
 #endif /* _LINUX_VIRTIO_VSOCK_H */
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 568a87c5e0d0..a73f5fbd296a 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -75,6 +75,7 @@ struct vsock_sock {
 	void *trans;
 };
 
+s64 vsock_connectible_has_data(struct vsock_sock *vsk);
 s64 vsock_stream_has_data(struct vsock_sock *vsk);
 s64 vsock_stream_has_space(struct vsock_sock *vsk);
 struct sock *vsock_create_connected(struct sock *parent);
@@ -173,6 +174,9 @@ struct vsock_transport {
 
 	/* Addressing. */
 	u32 (*get_local_cid)(void);
+
+	/* Read a single skb */
+	int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
 };
 
 /**** CORE ****/
@@ -225,5 +229,18 @@ int vsock_init_tap(void);
 int vsock_add_tap(struct vsock_tap *vt);
 int vsock_remove_tap(struct vsock_tap *vt);
 void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque);
+int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+			      int flags);
+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+			       size_t len, int flags);
+
+#ifdef CONFIG_BPF_SYSCALL
+extern struct proto vsock_proto;
+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
+void __init vsock_bpf_build_proto(void);
+#else
+static inline void __init vsock_bpf_build_proto(void)
+{}
+#endif
 
 #endif /* __AF_VSOCK_H__ */
diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile
index 6a943ec95c4a..5da74c4a9f1d 100644
--- a/net/vmw_vsock/Makefile
+++ b/net/vmw_vsock/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_HYPERV_VSOCKETS) += hv_sock.o
 obj-$(CONFIG_VSOCKETS_LOOPBACK) += vsock_loopback.o
 
 vsock-y += af_vsock.o af_vsock_tap.o vsock_addr.o
+vsock-$(CONFIG_BPF_SYSCALL) += vsock_bpf.o
 
 vsock_diag-y += diag.o
 
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 19aea7cba26e..f2cc04fb8b13 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -116,10 +116,13 @@ static void vsock_sk_destruct(struct sock *sk);
 static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
 
 /* Protocol family. */
-static struct proto vsock_proto = {
+struct proto vsock_proto = {
 	.name = "AF_VSOCK",
 	.owner = THIS_MODULE,
 	.obj_size = sizeof(struct vsock_sock),
+#ifdef CONFIG_BPF_SYSCALL
+	.psock_update_sk_prot = vsock_bpf_update_proto,
+#endif
 };
 
 /* The default peer timeout indicates how long we will wait for a peer response
@@ -865,7 +868,7 @@ s64 vsock_stream_has_data(struct vsock_sock *vsk)
 }
 EXPORT_SYMBOL_GPL(vsock_stream_has_data);
 
-static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
+s64 vsock_connectible_has_data(struct vsock_sock *vsk)
 {
 	struct sock *sk = sk_vsock(vsk);
 
@@ -874,6 +877,7 @@ static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
 	else
 		return vsock_stream_has_data(vsk);
 }
+EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
 
 s64 vsock_stream_has_space(struct vsock_sock *vsk)
 {
@@ -1131,6 +1135,13 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
 	return mask;
 }
 
+static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
+{
+	struct vsock_sock *vsk = vsock_sk(sk);
+
+	return vsk->transport->read_skb(vsk, read_actor);
+}
+
 static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 			       size_t len)
 {
@@ -1241,19 +1252,34 @@ static int vsock_dgram_connect(struct socket *sock,
 
 	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
 	sock->state = SS_CONNECTED;
+	sk->sk_state = TCP_ESTABLISHED;
 
 out:
 	release_sock(sk);
 	return err;
 }
 
-static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
-			       size_t len, int flags)
+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
+			size_t len, int flags)
 {
-	struct vsock_sock *vsk = vsock_sk(sock->sk);
+#ifdef CONFIG_BPF_SYSCALL
+	const struct proto *prot;
+#endif
+	struct vsock_sock *vsk;
+	struct sock *sk;
+
+	sk = sock->sk;
+	vsk = vsock_sk(sk);
+
+#ifdef CONFIG_BPF_SYSCALL
+	prot = READ_ONCE(sk->sk_prot);
+	if (prot != &vsock_proto)
+		return prot->recvmsg(sk, msg, len, flags, NULL);
+#endif
 
 	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
 }
+EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
 
 static const struct proto_ops vsock_dgram_ops = {
 	.family = PF_VSOCK,
@@ -1272,6 +1298,7 @@ static const struct proto_ops vsock_dgram_ops = {
 	.recvmsg = vsock_dgram_recvmsg,
 	.mmap = sock_no_mmap,
 	.sendpage = sock_no_sendpage,
+	.read_skb = vsock_read_skb,
 };
 
 static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
@@ -2086,13 +2113,16 @@ static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
 	return err;
 }
 
-static int
+int
 vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 			  int flags)
 {
 	struct sock *sk;
 	struct vsock_sock *vsk;
 	const struct vsock_transport *transport;
+#ifdef CONFIG_BPF_SYSCALL
+	const struct proto *prot;
+#endif
 	int err;
 
 	sk = sock->sk;
@@ -2139,6 +2169,14 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 		goto out;
 	}
 
+#ifdef CONFIG_BPF_SYSCALL
+	prot = READ_ONCE(sk->sk_prot);
+	if (prot != &vsock_proto) {
+		release_sock(sk);
+		return prot->recvmsg(sk, msg, len, flags, NULL);
+	}
+#endif
+
 	if (sk->sk_type == SOCK_STREAM)
 		err = __vsock_stream_recvmsg(sk, msg, len, flags);
 	else
@@ -2148,6 +2186,7 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 	release_sock(sk);
 	return err;
 }
+EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);
 
 static int vsock_set_rcvlowat(struct sock *sk, int val)
 {
@@ -2188,6 +2227,7 @@ static const struct proto_ops vsock_stream_ops = {
 	.mmap = sock_no_mmap,
 	.sendpage = sock_no_sendpage,
 	.set_rcvlowat = vsock_set_rcvlowat,
+	.read_skb = vsock_read_skb,
 };
 
 static const struct proto_ops vsock_seqpacket_ops = {
@@ -2209,6 +2249,7 @@ static const struct proto_ops vsock_seqpacket_ops = {
 	.recvmsg = vsock_connectible_recvmsg,
 	.mmap = sock_no_mmap,
 	.sendpage = sock_no_sendpage,
+	.read_skb = vsock_read_skb,
 };
 
 static int vsock_create(struct net *net, struct socket *sock,
@@ -2348,6 +2389,8 @@ static int __init vsock_init(void)
 		goto err_unregister_proto;
 	}
 
+	vsock_bpf_build_proto();
+
 	return 0;
 
 err_unregister_proto:
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 28b5a8e8e094..e95df847176b 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -457,6 +457,8 @@ static struct virtio_transport virtio_transport = {
 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
+
+		.read_skb = virtio_transport_read_skb,
 	},
 
 	.send_pkt = virtio_transport_send_pkt,
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index a1581c77cf84..82bdaf61ee02 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -1388,6 +1388,30 @@ int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *queue)
 }
 EXPORT_SYMBOL_GPL(virtio_transport_purge_skbs);
 
+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_actor)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	struct sock *sk = sk_vsock(vsk);
+	struct sk_buff *skb;
+	int off = 0;
+	int copied;
+	int err;
+
+	spin_lock_bh(&vvs->rx_lock);
+	/* Use __skb_recv_datagram() for race-free handling of the receive. It
+	 * works for types other than dgrams. */
+	skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err);
+	spin_unlock_bh(&vvs->rx_lock);
+
+	if (!skb)
+		return err;
+
+	copied = recv_actor(sk, skb);
+	kfree_skb(skb);
+	return copied;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
+
 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Asias He");
 MODULE_DESCRIPTION("common code for virtio vsock");
diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
new file mode 100644
index 000000000000..ee92a1f42031
--- /dev/null
+++ b/net/vmw_vsock/vsock_bpf.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Bobby Eshleman <bobby.eshleman@bytedance.com>
+ *
+ * Based off of net/unix/unix_bpf.c
+ */
+
+#include <linux/bpf.h>
+#include <linux/module.h>
+#include <linux/skmsg.h>
+#include <linux/socket.h>
+#include <linux/wait.h>
+#include <net/af_vsock.h>
+#include <net/sock.h>
+
+#define vsock_sk_has_data(__sk, __psock)				\
+		({	!skb_queue_empty(&__sk->sk_receive_queue) ||	\
+			!skb_queue_empty(&__psock->ingress_skb) ||	\
+			!list_empty(&__psock->ingress_msg);		\
+		})
+
+static struct proto *vsock_prot_saved __read_mostly;
+static DEFINE_SPINLOCK(vsock_prot_lock);
+static struct proto vsock_bpf_prot;
+
+static bool vsock_has_data(struct sock *sk, struct sk_psock *psock)
+{
+	struct vsock_sock *vsk = vsock_sk(sk);
+	s64 ret;
+
+	ret = vsock_connectible_has_data(vsk);
+	if (ret > 0)
+		return true;
+
+	return vsock_sk_has_data(sk, psock);
+}
+
+static int vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock, long timeo)
+{
+	int ret = 0;
+
+	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+
+	if (sk->sk_shutdown & RCV_SHUTDOWN)
+		return 1;
+
+	if (!timeo)
+		return ret;
+
+	add_wait_queue(sk_sleep(sk), &wait);
+	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+	ret = vsock_has_data(sk, psock);
+	if (!ret) {
+		wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+		ret = vsock_has_data(sk, psock);
+	}
+	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+	remove_wait_queue(sk_sleep(sk), &wait);
+	return ret;
+}
+
+static int __vsock_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags)
+{
+	struct socket *sock = sk->sk_socket;
+	int err;
+
+	if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
+		err = vsock_connectible_recvmsg(sock, msg, len, flags);
+	else if (sk->sk_type == SOCK_DGRAM)
+		err = vsock_dgram_recvmsg(sock, msg, len, flags);
+	else
+		err = -EPROTOTYPE;
+
+	return err;
+}
+
+static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+			     size_t len, int flags, int *addr_len)
+{
+	struct sk_psock *psock;
+	int copied;
+
+	psock = sk_psock_get(sk);
+	lock_sock(sk);
+	if (unlikely(!psock)) {
+		release_sock(sk);
+		return __vsock_recvmsg(sk, msg, len, flags);
+	}
+
+	if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
+		release_sock(sk);
+		sk_psock_put(sk, psock);
+		return __vsock_recvmsg(sk, msg, len, flags);
+	}
+
+msg_bytes_ready:
+	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+	if (!copied) {
+		long timeo;
+		int data;
+
+		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+		data = vsock_msg_wait_data(sk, psock, timeo);
+		if (data) {
+			if (!sk_psock_queue_empty(psock))
+				goto msg_bytes_ready;
+			release_sock(sk);
+			sk_psock_put(sk, psock);
+			return __vsock_recvmsg(sk, msg, len, flags);
+		}
+		copied = -EAGAIN;
+	}
+	release_sock(sk);
+	sk_psock_put(sk, psock);
+
+	return copied;
+}
+
+/* Copy of original proto with updated sock_map methods */
+static struct proto vsock_bpf_prot = {
+	.close = sock_map_close,
+	.recvmsg = vsock_bpf_recvmsg,
+	.sock_is_readable = sk_msg_is_readable,
+	.unhash = sock_map_unhash,
+};
+
+static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
+{
+	*prot        = *base;
+	prot->close  = sock_map_close;
+	prot->recvmsg = vsock_bpf_recvmsg;
+	prot->sock_is_readable = sk_msg_is_readable;
+}
+
+static void vsock_bpf_check_needs_rebuild(struct proto *ops)
+{
+	/* Paired with the smp_store_release() below. */
+	if (unlikely(ops != smp_load_acquire(&vsock_prot_saved))) {
+		spin_lock_bh(&vsock_prot_lock);
+		if (likely(ops != vsock_prot_saved)) {
+			vsock_bpf_rebuild_protos(&vsock_bpf_prot, ops);
+			/* Make sure proto function pointers are updated before publishing the
+			 * pointer to the struct.
+			 */
+			smp_store_release(&vsock_prot_saved, ops);
+		}
+		spin_unlock_bh(&vsock_prot_lock);
+	}
+}
+
+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
+{
+	struct vsock_sock *vsk;
+
+	if (restore) {
+		sk->sk_write_space = psock->saved_write_space;
+		sock_replace_proto(sk, psock->sk_proto);
+		return 0;
+	}
+
+	vsk = vsock_sk(sk);
+	if (!vsk->transport)
+		return -ENODEV;
+
+	if (!vsk->transport->read_skb)
+		return -EOPNOTSUPP;
+
+	vsock_bpf_check_needs_rebuild(psock->sk_proto);
+	sock_replace_proto(sk, &vsock_bpf_prot);
+	return 0;
+}
+
+void __init vsock_bpf_build_proto(void)
+{
+	vsock_bpf_rebuild_protos(&vsock_bpf_prot, &vsock_proto);
+}
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 671e03240fc5..40753b661c13 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -94,6 +94,8 @@ static struct virtio_transport loopback_transport = {
 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
+
+		.read_skb = virtio_transport_read_skb,
 	},
 
 	.send_pkt = vsock_loopback_send_pkt,

-- 
2.35.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh
  2023-01-31  4:35 [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Bobby Eshleman
  2023-01-31  4:35 ` [PATCH RFC net-next v2 1/3] vsock: support sockmap Bobby Eshleman
@ 2023-01-31  4:35 ` Bobby Eshleman
  2023-02-16 10:12     ` Stefano Garzarella
  2023-01-31  4:35 ` [PATCH RFC net-next v2 3/3] selftests/bpf: Add a test case for vsock sockmap Bobby Eshleman
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Bobby Eshleman @ 2023-01-31  4:35 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrii Nakryiko, Mykola Lysenko, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan
  Cc: Bobby Eshleman, Bobby Eshleman, linux-kernel, kvm,
	virtualization, netdev, bpf, linux-kselftest, jakub, hdanton,
	cong.wang

Add vsock loopback to the test kernel.

This allows sockmap for vsock to be tested.

Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
---
 tools/testing/selftests/bpf/config.aarch64 | 2 ++
 tools/testing/selftests/bpf/config.s390x   | 3 +++
 tools/testing/selftests/bpf/config.x86_64  | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/tools/testing/selftests/bpf/config.aarch64 b/tools/testing/selftests/bpf/config.aarch64
index 1f0437644186..253821494884 100644
--- a/tools/testing/selftests/bpf/config.aarch64
+++ b/tools/testing/selftests/bpf/config.aarch64
@@ -176,6 +176,8 @@ CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
 CONFIG_VIRTIO_MMIO=y
 CONFIG_VIRTIO_NET=y
 CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
 CONFIG_VLAN_8021Q=y
 CONFIG_VSOCKETS=y
+CONFIG_VSOCKETS_LOOPBACK=y
 CONFIG_XFRM_USER=y
diff --git a/tools/testing/selftests/bpf/config.s390x b/tools/testing/selftests/bpf/config.s390x
index d49f6170e7bd..2ba92167be35 100644
--- a/tools/testing/selftests/bpf/config.s390x
+++ b/tools/testing/selftests/bpf/config.s390x
@@ -140,5 +140,8 @@ CONFIG_VIRTIO_BALLOON=y
 CONFIG_VIRTIO_BLK=y
 CONFIG_VIRTIO_NET=y
 CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
 CONFIG_VLAN_8021Q=y
+CONFIG_VSOCKETS=y
+CONFIG_VSOCKETS_LOOPBACK=y
 CONFIG_XFRM_USER=y
diff --git a/tools/testing/selftests/bpf/config.x86_64 b/tools/testing/selftests/bpf/config.x86_64
index dd97d61d325c..b650b2e617b8 100644
--- a/tools/testing/selftests/bpf/config.x86_64
+++ b/tools/testing/selftests/bpf/config.x86_64
@@ -234,7 +234,10 @@ CONFIG_VIRTIO_BLK=y
 CONFIG_VIRTIO_CONSOLE=y
 CONFIG_VIRTIO_NET=y
 CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_VSOCKETS_COMMON=y
 CONFIG_VLAN_8021Q=y
+CONFIG_VSOCKETS=y
+CONFIG_VSOCKETS_LOOPBACK=y
 CONFIG_X86_ACPI_CPUFREQ=y
 CONFIG_X86_CPUID=y
 CONFIG_X86_MSR=y

-- 
2.35.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH RFC net-next v2 3/3] selftests/bpf: Add a test case for vsock sockmap
  2023-01-31  4:35 [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Bobby Eshleman
  2023-01-31  4:35 ` [PATCH RFC net-next v2 1/3] vsock: support sockmap Bobby Eshleman
  2023-01-31  4:35 ` [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh Bobby Eshleman
@ 2023-01-31  4:35 ` Bobby Eshleman
  2023-02-16 10:16     ` Stefano Garzarella
  2023-02-05 20:08   ` Cong Wang
  2023-02-16 10:20   ` Stefano Garzarella
  4 siblings, 1 reply; 16+ messages in thread
From: Bobby Eshleman @ 2023-01-31  4:35 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrii Nakryiko, Mykola Lysenko, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan
  Cc: Bobby Eshleman, Bobby Eshleman, linux-kernel, kvm,
	virtualization, netdev, bpf, linux-kselftest, jakub, hdanton,
	cong.wang

Add a test case testing the redirection from connectible AF_VSOCK
sockets to connectible AF_UNIX sockets.

Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
---
 .../selftests/bpf/prog_tests/sockmap_listen.c      | 163 +++++++++++++++++++++
 1 file changed, 163 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
index 2cf0c7a3fe23..8b5a2e09c9ed 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
@@ -18,6 +18,7 @@
 #include <string.h>
 #include <sys/select.h>
 #include <unistd.h>
+#include <linux/vm_sockets.h>
 
 #include <bpf/bpf.h>
 #include <bpf/libbpf.h>
@@ -249,6 +250,16 @@ static void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len)
 	*len = sizeof(*addr6);
 }
 
+static void init_addr_loopback_vsock(struct sockaddr_storage *ss, socklen_t *len)
+{
+	struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss));
+
+	addr->svm_family = AF_VSOCK;
+	addr->svm_port = VMADDR_PORT_ANY;
+	addr->svm_cid = VMADDR_CID_LOCAL;
+	*len = sizeof(*addr);
+}
+
 static void init_addr_loopback(int family, struct sockaddr_storage *ss,
 			       socklen_t *len)
 {
@@ -259,6 +270,9 @@ static void init_addr_loopback(int family, struct sockaddr_storage *ss,
 	case AF_INET6:
 		init_addr_loopback6(ss, len);
 		return;
+	case AF_VSOCK:
+		init_addr_loopback_vsock(ss, len);
+		return;
 	default:
 		FAIL("unsupported address family %d", family);
 	}
@@ -1434,6 +1448,8 @@ static const char *family_str(sa_family_t family)
 		return "IPv6";
 	case AF_UNIX:
 		return "Unix";
+	case AF_VSOCK:
+		return "VSOCK";
 	default:
 		return "unknown";
 	}
@@ -1644,6 +1660,151 @@ static void test_unix_redir(struct test_sockmap_listen *skel, struct bpf_map *ma
 	unix_skb_redir_to_connected(skel, map, sotype);
 }
 
+/* Returns two connected loopback vsock sockets */
+static int vsock_socketpair_connectible(int sotype, int *v0, int *v1)
+{
+	struct sockaddr_storage addr;
+	socklen_t len = sizeof(addr);
+	int s, p, c;
+
+	s = socket_loopback(AF_VSOCK, sotype);
+	if (s < 0)
+		return -1;
+
+	c = xsocket(AF_VSOCK, sotype | SOCK_NONBLOCK, 0);
+	if (c == -1)
+		goto close_srv;
+
+	if (getsockname(s, sockaddr(&addr), &len) < 0)
+		goto close_cli;
+
+	if (connect(c, sockaddr(&addr), len) < 0 && errno != EINPROGRESS) {
+		FAIL_ERRNO("connect");
+		goto close_cli;
+	}
+
+	len = sizeof(addr);
+	p = accept_timeout(s, sockaddr(&addr), &len, IO_TIMEOUT_SEC);
+	if (p < 0)
+		goto close_cli;
+
+	*v0 = p;
+	*v1 = c;
+
+	return 0;
+
+close_cli:
+	close(c);
+close_srv:
+	close(s);
+
+	return -1;
+}
+
+static void vsock_unix_redir_connectible(int sock_mapfd, int verd_mapfd,
+					 enum redir_mode mode, int sotype)
+{
+	const char *log_prefix = redir_mode_str(mode);
+	char a = 'a', b = 'b';
+	int u0, u1, v0, v1;
+	int sfd[2];
+	unsigned int pass;
+	int err, n;
+	u32 key;
+
+	zero_verdict_count(verd_mapfd);
+
+	if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0, sfd))
+		return;
+
+	u0 = sfd[0];
+	u1 = sfd[1];
+
+	err = vsock_socketpair_connectible(sotype, &v0, &v1);
+	if (err) {
+		FAIL("vsock_socketpair_connectible() failed");
+		goto close_uds;
+	}
+
+	err = add_to_sockmap(sock_mapfd, u0, v0);
+	if (err) {
+		FAIL("add_to_sockmap failed");
+		goto close_vsock;
+	}
+
+	n = write(v1, &a, sizeof(a));
+	if (n < 0)
+		FAIL_ERRNO("%s: write", log_prefix);
+	if (n == 0)
+		FAIL("%s: incomplete write", log_prefix);
+	if (n < 1)
+		goto out;
+
+	n = recv(mode == REDIR_INGRESS ? u0 : u1, &b, sizeof(b), MSG_DONTWAIT);
+	if (n < 0)
+		FAIL("%s: recv() err, errno=%d", log_prefix, errno);
+	if (n == 0)
+		FAIL("%s: incomplete recv", log_prefix);
+	if (b != a)
+		FAIL("%s: vsock socket map failed, %c != %c", log_prefix, a, b);
+
+	key = SK_PASS;
+	err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
+	if (err)
+		goto out;
+	if (pass != 1)
+		FAIL("%s: want pass count 1, have %d", log_prefix, pass);
+out:
+	key = 0;
+	bpf_map_delete_elem(sock_mapfd, &key);
+	key = 1;
+	bpf_map_delete_elem(sock_mapfd, &key);
+
+close_vsock:
+	close(v0);
+	close(v1);
+
+close_uds:
+	close(u0);
+	close(u1);
+}
+
+static void vsock_unix_skb_redir_connectible(struct test_sockmap_listen *skel,
+					     struct bpf_map *inner_map,
+					     int sotype)
+{
+	int verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
+	int verdict_map = bpf_map__fd(skel->maps.verdict_map);
+	int sock_map = bpf_map__fd(inner_map);
+	int err;
+
+	err = xbpf_prog_attach(verdict, sock_map, BPF_SK_SKB_VERDICT, 0);
+	if (err)
+		return;
+
+	skel->bss->test_ingress = false;
+	vsock_unix_redir_connectible(sock_map, verdict_map, REDIR_EGRESS, sotype);
+	skel->bss->test_ingress = true;
+	vsock_unix_redir_connectible(sock_map, verdict_map, REDIR_INGRESS, sotype);
+
+	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
+}
+
+static void test_vsock_redir(struct test_sockmap_listen *skel, struct bpf_map *map)
+{
+	const char *family_name, *map_name;
+	char s[MAX_TEST_NAME];
+
+	family_name = family_str(AF_VSOCK);
+	map_name = map_type_str(map);
+	snprintf(s, sizeof(s), "%s %s %s", map_name, family_name, __func__);
+	if (!test__start_subtest(s))
+		return;
+
+	vsock_unix_skb_redir_connectible(skel, map, SOCK_STREAM);
+	vsock_unix_skb_redir_connectible(skel, map, SOCK_SEQPACKET);
+}
+
 static void test_reuseport(struct test_sockmap_listen *skel,
 			   struct bpf_map *map, int family, int sotype)
 {
@@ -2015,12 +2176,14 @@ void serial_test_sockmap_listen(void)
 	run_tests(skel, skel->maps.sock_map, AF_INET6);
 	test_unix_redir(skel, skel->maps.sock_map, SOCK_DGRAM);
 	test_unix_redir(skel, skel->maps.sock_map, SOCK_STREAM);
+	test_vsock_redir(skel, skel->maps.sock_map);
 
 	skel->bss->test_sockmap = false;
 	run_tests(skel, skel->maps.sock_hash, AF_INET);
 	run_tests(skel, skel->maps.sock_hash, AF_INET6);
 	test_unix_redir(skel, skel->maps.sock_hash, SOCK_DGRAM);
 	test_unix_redir(skel, skel->maps.sock_hash, SOCK_STREAM);
+	test_vsock_redir(skel, skel->maps.sock_hash);
 
 	test_sockmap_listen__destroy(skel);
 }

-- 
2.35.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 0/3] vsock: add support for sockmap
  2023-02-05 20:08   ` Cong Wang
  (?)
@ 2023-02-03  5:30   ` Bobby Eshleman
  -1 siblings, 0 replies; 16+ messages in thread
From: Bobby Eshleman @ 2023-02-03  5:30 UTC (permalink / raw)
  To: Cong Wang
  Cc: Bobby Eshleman, Stefan Hajnoczi, Stefano Garzarella,
	Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Andrii Nakryiko, Mykola Lysenko,
	Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
	Hao Luo, Jiri Olsa, Shuah Khan, linux-kernel, kvm,
	virtualization, netdev, bpf, linux-kselftest, jakub, hdanton,
	cong.wang

On Sun, Feb 05, 2023 at 12:08:49PM -0800, Cong Wang wrote:
> On Mon, Jan 30, 2023 at 08:35:11PM -0800, Bobby Eshleman wrote:
> > Add support for sockmap to vsock.
> > 
> > We're testing usage of vsock as a way to redirect guest-local UDS requests to
> > the host and this patch series greatly improves the performance of such a
> > setup.
> > 
> > Compared to copying packets via userspace, this improves throughput by 121% in
> > basic testing.
> > 
> > Tested as follows.
> > 
> > Setup: guest unix dgram sender -> guest vsock redirector -> host vsock server
> > Threads: 1
> > Payload: 64k
> > No sockmap:
> > - 76.3 MB/s
> > - The guest vsock redirector was
> >   "socat VSOCK-CONNECT:2:1234 UNIX-RECV:/path/to/sock"
> > Using sockmap (this patch):
> > - 168.8 MB/s (+121%)
> > - The guest redirector was a simple sockmap echo server,
> >   redirecting unix ingress to vsock 2:1234 egress.
> > - Same sender and server programs
> > 
> > *Note: these numbers are from RFC v1
> > 
> > Only the virtio transport has been tested. The loopback transport was used in
> > writing bpf/selftests, but not thoroughly tested otherwise.
> > 
> > This series requires the skb patch.
> > 
> 
> Looks good to me. Definitely good to go as non-RFC.
> 
> Thanks.

Thank you for the review.

Best,
Bobby

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 0/3] vsock: add support for sockmap
  2023-01-31  4:35 [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Bobby Eshleman
@ 2023-02-05 20:08   ` Cong Wang
  2023-01-31  4:35 ` [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh Bobby Eshleman
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Cong Wang @ 2023-02-05 20:08 UTC (permalink / raw)
  To: Bobby Eshleman
  Cc: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Andrii Nakryiko, Mykola Lysenko, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, Bobby Eshleman, linux-kernel, kvm, virtualization,
	netdev, bpf, linux-kselftest, jakub, hdanton, cong.wang

On Mon, Jan 30, 2023 at 08:35:11PM -0800, Bobby Eshleman wrote:
> Add support for sockmap to vsock.
> 
> We're testing usage of vsock as a way to redirect guest-local UDS requests to
> the host and this patch series greatly improves the performance of such a
> setup.
> 
> Compared to copying packets via userspace, this improves throughput by 121% in
> basic testing.
> 
> Tested as follows.
> 
> Setup: guest unix dgram sender -> guest vsock redirector -> host vsock server
> Threads: 1
> Payload: 64k
> No sockmap:
> - 76.3 MB/s
> - The guest vsock redirector was
>   "socat VSOCK-CONNECT:2:1234 UNIX-RECV:/path/to/sock"
> Using sockmap (this patch):
> - 168.8 MB/s (+121%)
> - The guest redirector was a simple sockmap echo server,
>   redirecting unix ingress to vsock 2:1234 egress.
> - Same sender and server programs
> 
> *Note: these numbers are from RFC v1
> 
> Only the virtio transport has been tested. The loopback transport was used in
> writing bpf/selftests, but not thoroughly tested otherwise.
> 
> This series requires the skb patch.
> 

Looks good to me. Definitely good to go as non-RFC.

Thanks.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 1/3] vsock: support sockmap
  2023-01-31  4:35 ` [PATCH RFC net-next v2 1/3] vsock: support sockmap Bobby Eshleman
@ 2023-02-16 10:11     ` Stefano Garzarella
  0 siblings, 0 replies; 16+ messages in thread
From: Stefano Garzarella @ 2023-02-16 10:11 UTC (permalink / raw)
  To: Bobby Eshleman
  Cc: Bobby Eshleman, hdanton, cong.wang, kvm, Michael S. Tsirkin,
	Alexei Starovoitov, virtualization, Song Liu, Eric Dumazet,
	Stanislav Fomichev, linux-kselftest, Shuah Khan, Mykola Lysenko,
	jakub, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
	Yonghong Song, Paolo Abeni, KP Singh, Stefan Hajnoczi,
	Jakub Kicinski, Hao Luo, netdev, linux-kernel, Jiri Olsa, bpf,
	Martin KaFai Lau, David S. Miller

On Mon, Jan 30, 2023 at 08:35:12PM -0800, Bobby Eshleman wrote:
>This patch adds sockmap support for vsock sockets. It is intended to be
>usable by all transports, but only the virtio transport is implemented.
>
>Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
>---
> drivers/vhost/vsock.c                   |   1 +
> include/linux/virtio_vsock.h            |   1 +
> include/net/af_vsock.h                  |  17 ++++
> net/vmw_vsock/Makefile                  |   1 +
> net/vmw_vsock/af_vsock.c                |  55 ++++++++--
> net/vmw_vsock/virtio_transport.c        |   2 +
> net/vmw_vsock/virtio_transport_common.c |  24 +++++
> net/vmw_vsock/vsock_bpf.c               | 175 ++++++++++++++++++++++++++++++++
> net/vmw_vsock/vsock_loopback.c          |   2 +
> 9 files changed, 272 insertions(+), 6 deletions(-)
>
>diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>index 1f3b89c885cc..3c6dc036b904 100644
>--- a/drivers/vhost/vsock.c
>+++ b/drivers/vhost/vsock.c
>@@ -439,6 +439,7 @@ static struct virtio_transport vhost_transport = {
> 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
>
>+		.read_skb = virtio_transport_read_skb,
> 	},
>
> 	.send_pkt = vhost_transport_send_pkt,
>diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>index 3f9c16611306..c58453699ee9 100644
>--- a/include/linux/virtio_vsock.h
>+++ b/include/linux/virtio_vsock.h
>@@ -245,4 +245,5 @@ u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 wanted);
> void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
> void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
> int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
>+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
> #endif /* _LINUX_VIRTIO_VSOCK_H */
>diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
>index 568a87c5e0d0..a73f5fbd296a 100644
>--- a/include/net/af_vsock.h
>+++ b/include/net/af_vsock.h
>@@ -75,6 +75,7 @@ struct vsock_sock {
> 	void *trans;
> };
>
>+s64 vsock_connectible_has_data(struct vsock_sock *vsk);
> s64 vsock_stream_has_data(struct vsock_sock *vsk);
> s64 vsock_stream_has_space(struct vsock_sock *vsk);
> struct sock *vsock_create_connected(struct sock *parent);
>@@ -173,6 +174,9 @@ struct vsock_transport {
>
> 	/* Addressing. */
> 	u32 (*get_local_cid)(void);
>+
>+	/* Read a single skb */
>+	int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
> };
>
> /**** CORE ****/
>@@ -225,5 +229,18 @@ int vsock_init_tap(void);
> int vsock_add_tap(struct vsock_tap *vt);
> int vsock_remove_tap(struct vsock_tap *vt);
> void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque);
>+int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
>+			      int flags);
>+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>+			       size_t len, int flags);

checkpatch complains: "CHECK: Alignment should match open parenthesis"
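i.e., something like this (adjusting whitespace only):

	int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
				size_t len, int flags);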

>+
>+#ifdef CONFIG_BPF_SYSCALL
>+extern struct proto vsock_proto;
>+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
>+void __init vsock_bpf_build_proto(void);
>+#else
>+static inline void __init vsock_bpf_build_proto(void)
>+{}
>+#endif
>
> #endif /* __AF_VSOCK_H__ */
>diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile
>index 6a943ec95c4a..5da74c4a9f1d 100644
>--- a/net/vmw_vsock/Makefile
>+++ b/net/vmw_vsock/Makefile
>@@ -8,6 +8,7 @@ obj-$(CONFIG_HYPERV_VSOCKETS) += hv_sock.o
> obj-$(CONFIG_VSOCKETS_LOOPBACK) += vsock_loopback.o
>
> vsock-y += af_vsock.o af_vsock_tap.o vsock_addr.o
>+vsock-$(CONFIG_BPF_SYSCALL) += vsock_bpf.o
>
> vsock_diag-y += diag.o
>
>diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
>index 19aea7cba26e..f2cc04fb8b13 100644
>--- a/net/vmw_vsock/af_vsock.c
>+++ b/net/vmw_vsock/af_vsock.c
>@@ -116,10 +116,13 @@ static void vsock_sk_destruct(struct sock *sk);
> static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
>
> /* Protocol family. */
>-static struct proto vsock_proto = {
>+struct proto vsock_proto = {
> 	.name = "AF_VSOCK",
> 	.owner = THIS_MODULE,
> 	.obj_size = sizeof(struct vsock_sock),
>+#ifdef CONFIG_BPF_SYSCALL
>+	.psock_update_sk_prot = vsock_bpf_update_proto,
>+#endif
> };
>
> /* The default peer timeout indicates how long we will wait for a peer response
>@@ -865,7 +868,7 @@ s64 vsock_stream_has_data(struct vsock_sock *vsk)
> }
> EXPORT_SYMBOL_GPL(vsock_stream_has_data);
>
>-static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
>+s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> {
> 	struct sock *sk = sk_vsock(vsk);
>
>@@ -874,6 +877,7 @@ static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> 	else
> 		return vsock_stream_has_data(vsk);
> }
>+EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
>
> s64 vsock_stream_has_space(struct vsock_sock *vsk)
> {
>@@ -1131,6 +1135,13 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
> 	return mask;
> }
>
>+static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
>+{
>+	struct vsock_sock *vsk = vsock_sk(sk);
>+
>+	return vsk->transport->read_skb(vsk, read_actor);
>+}
>+
> static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> 			       size_t len)
> {
>@@ -1241,19 +1252,34 @@ static int vsock_dgram_connect(struct socket *sock,
>
> 	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
> 	sock->state = SS_CONNECTED;
>+	sk->sk_state = TCP_ESTABLISHED;
>
> out:
> 	release_sock(sk);
> 	return err;
> }
>
>-static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>-			       size_t len, int flags)
>+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>+			size_t len, int flags)
> {
>-	struct vsock_sock *vsk = vsock_sk(sock->sk);
>+#ifdef CONFIG_BPF_SYSCALL
>+	const struct proto *prot;
>+#endif
>+	struct vsock_sock *vsk;
>+	struct sock *sk;
>+
>+	sk = sock->sk;
>+	vsk = vsock_sk(sk);
>+
>+#ifdef CONFIG_BPF_SYSCALL
>+	prot = READ_ONCE(sk->sk_prot);
>+	if (prot != &vsock_proto)
>+		return prot->recvmsg(sk, msg, len, flags, NULL);
>+#endif
>
> 	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
> }
>+EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
>
> static const struct proto_ops vsock_dgram_ops = {
> 	.family = PF_VSOCK,
>@@ -1272,6 +1298,7 @@ static const struct proto_ops vsock_dgram_ops = {
> 	.recvmsg = vsock_dgram_recvmsg,
> 	.mmap = sock_no_mmap,
> 	.sendpage = sock_no_sendpage,
>+	.read_skb = vsock_read_skb,
> };
>
> static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
>@@ -2086,13 +2113,16 @@ static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
> 	return err;
> }
>
>-static int
>+int
> vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> 			  int flags)
> {
> 	struct sock *sk;
> 	struct vsock_sock *vsk;
> 	const struct vsock_transport *transport;
>+#ifdef CONFIG_BPF_SYSCALL
>+	const struct proto *prot;
>+#endif
> 	int err;
>
> 	sk = sock->sk;
>@@ -2139,6 +2169,14 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> 		goto out;
> 	}
>
>+#ifdef CONFIG_BPF_SYSCALL
>+	prot = READ_ONCE(sk->sk_prot);
>+	if (prot != &vsock_proto) {
>+		release_sock(sk);
>+		return prot->recvmsg(sk, msg, len, flags, NULL);
>+	}
>+#endif
>+
> 	if (sk->sk_type == SOCK_STREAM)
> 		err = __vsock_stream_recvmsg(sk, msg, len, flags);
> 	else
>@@ -2148,6 +2186,7 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> 	release_sock(sk);
> 	return err;
> }
>+EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);
>
> static int vsock_set_rcvlowat(struct sock *sk, int val)
> {
>@@ -2188,6 +2227,7 @@ static const struct proto_ops vsock_stream_ops = {
> 	.mmap = sock_no_mmap,
> 	.sendpage = sock_no_sendpage,
> 	.set_rcvlowat = vsock_set_rcvlowat,
>+	.read_skb = vsock_read_skb,
> };
>
> static const struct proto_ops vsock_seqpacket_ops = {
>@@ -2209,6 +2249,7 @@ static const struct proto_ops vsock_seqpacket_ops = {
> 	.recvmsg = vsock_connectible_recvmsg,
> 	.mmap = sock_no_mmap,
> 	.sendpage = sock_no_sendpage,
>+	.read_skb = vsock_read_skb,
> };
>
> static int vsock_create(struct net *net, struct socket *sock,
>@@ -2348,6 +2389,8 @@ static int __init vsock_init(void)
> 		goto err_unregister_proto;
> 	}
>
>+	vsock_bpf_build_proto();
>+
> 	return 0;
>
> err_unregister_proto:
>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>index 28b5a8e8e094..e95df847176b 100644
>--- a/net/vmw_vsock/virtio_transport.c
>+++ b/net/vmw_vsock/virtio_transport.c
>@@ -457,6 +457,8 @@ static struct virtio_transport virtio_transport = {
> 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
> 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
>+
>+		.read_skb = virtio_transport_read_skb,
> 	},
>
> 	.send_pkt = virtio_transport_send_pkt,
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index a1581c77cf84..82bdaf61ee02 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -1388,6 +1388,30 @@ int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *queue)
> }
> EXPORT_SYMBOL_GPL(virtio_transport_purge_skbs);
>
>+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_actor)
>+{
>+	struct virtio_vsock_sock *vvs = vsk->trans;
>+	struct sock *sk = sk_vsock(vsk);
>+	struct sk_buff *skb;
>+	int off = 0;
>+	int copied;
>+	int err;
>+
>+	spin_lock_bh(&vvs->rx_lock);
>+	/* Use __skb_recv_datagram() for race-free handling of the receive. It
>+	 * works for types other than dgrams. */

Checkpatch:
   WARNING: Block comments use a trailing */ on a separate line

Since we also use this convention in the other comments in this file,
I would follow it.
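
For example (the same comment, just reflowed to that style):

	/* Use __skb_recv_datagram() for race-free handling of the receive. It
	 * works for types other than dgrams.
	 */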

>+	skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err);
>+	spin_unlock_bh(&vvs->rx_lock);
>+
>+	if (!skb)
>+		return err;
>+
>+	copied = recv_actor(sk, skb);
>+	kfree_skb(skb);
>+	return copied;
>+}
>+EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
>+
> MODULE_LICENSE("GPL v2");
> MODULE_AUTHOR("Asias He");
> MODULE_DESCRIPTION("common code for virtio vsock");
>diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
>new file mode 100644
>index 000000000000..ee92a1f42031
>--- /dev/null
>+++ b/net/vmw_vsock/vsock_bpf.c
>@@ -0,0 +1,175 @@
>+// SPDX-License-Identifier: GPL-2.0
>+/* Copyright (c) 2022 Bobby Eshleman <bobby.eshleman@bytedance.com>
>+ *
>+ * Based off of net/unix/unix_bpf.c
>+ */
>+
>+#include <linux/bpf.h>
>+#include <linux/module.h>
>+#include <linux/skmsg.h>
>+#include <linux/socket.h>
>+#include <linux/wait.h>
>+#include <net/af_vsock.h>
>+#include <net/sock.h>
>+
>+#define vsock_sk_has_data(__sk, __psock)				\
>+		({	!skb_queue_empty(&__sk->sk_receive_queue) ||	\
>+			!skb_queue_empty(&__psock->ingress_skb) ||	\
>+			!list_empty(&__psock->ingress_msg);		\
>+		})

Some suggestions from checkpatch that I would fix:

CHECK: Macro argument '__sk' may be better as '(__sk)' to avoid precedence issues
CHECK: Macro argument '__psock' may be better as '(__psock)' to avoid precedence issues
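
i.e., something like this (untested sketch):

	#define vsock_sk_has_data(__sk, __psock)			\
		({	!skb_queue_empty(&(__sk)->sk_receive_queue) ||	\
			!skb_queue_empty(&(__psock)->ingress_skb) ||	\
			!list_empty(&(__psock)->ingress_msg);		\
		})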

>+
>+static struct proto *vsock_prot_saved __read_mostly;
>+static DEFINE_SPINLOCK(vsock_prot_lock);
>+static struct proto vsock_bpf_prot;
>+
>+static bool vsock_has_data(struct sock *sk, struct sk_psock *psock)
>+{
>+	struct vsock_sock *vsk = vsock_sk(sk);
>+	s64 ret;
>+
>+	ret = vsock_connectible_has_data(vsk);
>+	if (ret > 0)
>+		return true;
>+
>+	return vsock_sk_has_data(sk, psock);
>+}
>+
>+static int vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock, long timeo)

IIUC this function now returns 0 when there is no data and 1 when there
is data; maybe it would be better to change the return type to bool.
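
i.e., something like this (sketch), with the 0/1 returns becoming false/true:

	static bool vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock,
					long timeo);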

>+{
>+	int ret = 0;
>+
>+	DEFINE_WAIT_FUNC(wait, woken_wake_function);
>+
>+	if (sk->sk_shutdown & RCV_SHUTDOWN)
>+		return 1;
>+
>+	if (!timeo)
>+		return ret;
>+
>+	add_wait_queue(sk_sleep(sk), &wait);
>+	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
>+	ret = vsock_has_data(sk, psock);
>+	if (!ret) {
>+		wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
>+		ret = vsock_has_data(sk, psock);
>+	}
>+	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
>+	remove_wait_queue(sk_sleep(sk), &wait);
>+	return ret;
>+}
>+
>+static int __vsock_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags)
>+{
>+	struct socket *sock = sk->sk_socket;
>+	int err;
>+
>+	if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
>+		err = vsock_connectible_recvmsg(sock, msg, len, flags);
>+	else if (sk->sk_type == SOCK_DGRAM)
>+		err = vsock_dgram_recvmsg(sock, msg, len, flags);
>+	else
>+		err = -EPROTOTYPE;
>+
>+	return err;
>+}
>+
>+static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
>+			     size_t len, int flags, int *addr_len)
>+{
>+	struct sk_psock *psock;
>+	int copied;
>+
>+	psock = sk_psock_get(sk);
>+	lock_sock(sk);
>+	if (unlikely(!psock)) {
>+		release_sock(sk);

I'm not sure we need the lock to check psock, so can we move this
block before lock_sock()?
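
i.e., something like this (untested sketch):

	psock = sk_psock_get(sk);
	if (unlikely(!psock))
		return __vsock_recvmsg(sk, msg, len, flags);

	lock_sock(sk);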

>+		return __vsock_recvmsg(sk, msg, len, flags);
>+	}
>+
>+	if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
>+		release_sock(sk);
>+		sk_psock_put(sk, psock);
>+		return __vsock_recvmsg(sk, msg, len, flags);
>+	}
>+
>+msg_bytes_ready:

I saw that this code is also in unix_bpf.c and tcp_bpf.c, so I guess
it's acceptable, but I would have avoided using a label to go back.
I prefer to use goto only for centralized exiting of functions.

One possible refactoring (not tested):

         copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
         while (!copied) {
                 long timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);

                 if (!vsock_msg_wait_data(sk, psock, timeo)) {
                         copied = -EAGAIN;
                         break;
                 }

                 if (sk_psock_queue_empty(psock)) {
                         release_sock(sk);
                         sk_psock_put(sk, psock);
                         return __vsock_recvmsg(sk, msg, len, flags);
                 }

                 copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
         }
         ...

I'm not sure why we need to call release_sock() and sk_psock_put()
before __vsock_recvmsg(). If that is not needed, we can simplify the
previous code even further.

The rest LGTM.

Thanks,
Stefano

>+	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
>+	if (!copied) {
>+		long timeo;
>+		int data;
>+
>+		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
>+		data = vsock_msg_wait_data(sk, psock, timeo);
>+		if (data) {
>+			if (!sk_psock_queue_empty(psock))
>+				goto msg_bytes_ready;
>+			release_sock(sk);
>+			sk_psock_put(sk, psock);
>+			return __vsock_recvmsg(sk, msg, len, flags);
>+		}
>+		copied = -EAGAIN;
>+	}
>+	release_sock(sk);
>+	sk_psock_put(sk, psock);
>+
>+	return copied;
>+}
>+
>+/* Copy of original proto with updated sock_map methods */
>+static struct proto vsock_bpf_prot = {
>+	.close = sock_map_close,
>+	.recvmsg = vsock_bpf_recvmsg,
>+	.sock_is_readable = sk_msg_is_readable,
>+	.unhash = sock_map_unhash,
>+};
>+
>+static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
>+{
>+	*prot        = *base;
>+	prot->close  = sock_map_close;
>+	prot->recvmsg = vsock_bpf_recvmsg;
>+	prot->sock_is_readable = sk_msg_is_readable;
>+}
>+
>+static void vsock_bpf_check_needs_rebuild(struct proto *ops)
>+{
>+	/* Paired with the smp_store_release() below. */
>+	if (unlikely(ops != smp_load_acquire(&vsock_prot_saved))) {
>+		spin_lock_bh(&vsock_prot_lock);
>+		if (likely(ops != vsock_prot_saved)) {
>+			vsock_bpf_rebuild_protos(&vsock_bpf_prot, ops);
>+			/* Make sure proto function pointers are updated before publishing the
>+			 * pointer to the struct.
>+			 */
>+			smp_store_release(&vsock_prot_saved, ops);
>+		}
>+		spin_unlock_bh(&vsock_prot_lock);
>+	}
>+}
>+
>+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
>+{
>+	struct vsock_sock *vsk;
>+
>+	if (restore) {
>+		sk->sk_write_space = psock->saved_write_space;
>+		sock_replace_proto(sk, psock->sk_proto);
>+		return 0;
>+	}
>+
>+	vsk = vsock_sk(sk);
>+	if (!vsk->transport)
>+		return -ENODEV;
>+
>+	if (!vsk->transport->read_skb)
>+		return -EOPNOTSUPP;
>+
>+	vsock_bpf_check_needs_rebuild(psock->sk_proto);
>+	sock_replace_proto(sk, &vsock_bpf_prot);
>+	return 0;
>+}
>+
>+void __init vsock_bpf_build_proto(void)
>+{
>+	vsock_bpf_rebuild_protos(&vsock_bpf_prot, &vsock_proto);
>+}
>diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
>index 671e03240fc5..40753b661c13 100644
>--- a/net/vmw_vsock/vsock_loopback.c
>+++ b/net/vmw_vsock/vsock_loopback.c
>@@ -94,6 +94,8 @@ static struct virtio_transport loopback_transport = {
> 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
> 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
>+
>+		.read_skb = virtio_transport_read_skb,
> 	},
>
> 	.send_pkt = vsock_loopback_send_pkt,
>
>-- 
>2.35.1
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 1/3] vsock: support sockmap
@ 2023-02-16 10:11     ` Stefano Garzarella
  0 siblings, 0 replies; 16+ messages in thread
From: Stefano Garzarella @ 2023-02-16 10:11 UTC (permalink / raw)
  To: Bobby Eshleman
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrii Nakryiko,
	Mykola Lysenko, Alexei Starovoitov, Daniel Borkmann,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, Shuah Khan,
	Bobby Eshleman, linux-kernel, kvm, virtualization, netdev, bpf,
	linux-kselftest, jakub, hdanton, cong.wang

On Mon, Jan 30, 2023 at 08:35:12PM -0800, Bobby Eshleman wrote:
>This patch adds sockmap support for vsock sockets. It is intended to be
>usable by all transports, but only the virtio transport is implemented.
>
>Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
>---
> drivers/vhost/vsock.c                   |   1 +
> include/linux/virtio_vsock.h            |   1 +
> include/net/af_vsock.h                  |  17 ++++
> net/vmw_vsock/Makefile                  |   1 +
> net/vmw_vsock/af_vsock.c                |  55 ++++++++--
> net/vmw_vsock/virtio_transport.c        |   2 +
> net/vmw_vsock/virtio_transport_common.c |  24 +++++
> net/vmw_vsock/vsock_bpf.c               | 175 ++++++++++++++++++++++++++++++++
> net/vmw_vsock/vsock_loopback.c          |   2 +
> 9 files changed, 272 insertions(+), 6 deletions(-)
>
>diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>index 1f3b89c885cc..3c6dc036b904 100644
>--- a/drivers/vhost/vsock.c
>+++ b/drivers/vhost/vsock.c
>@@ -439,6 +439,7 @@ static struct virtio_transport vhost_transport = {
> 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
>
>+		.read_skb = virtio_transport_read_skb,
> 	},
>
> 	.send_pkt = vhost_transport_send_pkt,
>diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>index 3f9c16611306..c58453699ee9 100644
>--- a/include/linux/virtio_vsock.h
>+++ b/include/linux/virtio_vsock.h
>@@ -245,4 +245,5 @@ u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 wanted);
> void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
> void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
> int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
>+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
> #endif /* _LINUX_VIRTIO_VSOCK_H */
>diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
>index 568a87c5e0d0..a73f5fbd296a 100644
>--- a/include/net/af_vsock.h
>+++ b/include/net/af_vsock.h
>@@ -75,6 +75,7 @@ struct vsock_sock {
> 	void *trans;
> };
>
>+s64 vsock_connectible_has_data(struct vsock_sock *vsk);
> s64 vsock_stream_has_data(struct vsock_sock *vsk);
> s64 vsock_stream_has_space(struct vsock_sock *vsk);
> struct sock *vsock_create_connected(struct sock *parent);
>@@ -173,6 +174,9 @@ struct vsock_transport {
>
> 	/* Addressing. */
> 	u32 (*get_local_cid)(void);
>+
>+	/* Read a single skb */
>+	int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
> };
>
> /**** CORE ****/
>@@ -225,5 +229,18 @@ int vsock_init_tap(void);
> int vsock_add_tap(struct vsock_tap *vt);
> int vsock_remove_tap(struct vsock_tap *vt);
> void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque);
>+int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
>+			      int flags);
>+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>+			       size_t len, int flags);

checkpatch complains: "CHECK: Alignment should match open parenthesis"
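
i.e. the continuation line of the second declaration aligned to the open
parenthesis, something like:

        int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
                                size_t len, int flags);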

>+
>+#ifdef CONFIG_BPF_SYSCALL
>+extern struct proto vsock_proto;
>+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
>+void __init vsock_bpf_build_proto(void);
>+#else
>+static inline void __init vsock_bpf_build_proto(void)
>+{}
>+#endif
>
> #endif /* __AF_VSOCK_H__ */
>diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile
>index 6a943ec95c4a..5da74c4a9f1d 100644
>--- a/net/vmw_vsock/Makefile
>+++ b/net/vmw_vsock/Makefile
>@@ -8,6 +8,7 @@ obj-$(CONFIG_HYPERV_VSOCKETS) += hv_sock.o
> obj-$(CONFIG_VSOCKETS_LOOPBACK) += vsock_loopback.o
>
> vsock-y += af_vsock.o af_vsock_tap.o vsock_addr.o
>+vsock-$(CONFIG_BPF_SYSCALL) += vsock_bpf.o
>
> vsock_diag-y += diag.o
>
>diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
>index 19aea7cba26e..f2cc04fb8b13 100644
>--- a/net/vmw_vsock/af_vsock.c
>+++ b/net/vmw_vsock/af_vsock.c
>@@ -116,10 +116,13 @@ static void vsock_sk_destruct(struct sock *sk);
> static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
>
> /* Protocol family. */
>-static struct proto vsock_proto = {
>+struct proto vsock_proto = {
> 	.name = "AF_VSOCK",
> 	.owner = THIS_MODULE,
> 	.obj_size = sizeof(struct vsock_sock),
>+#ifdef CONFIG_BPF_SYSCALL
>+	.psock_update_sk_prot = vsock_bpf_update_proto,
>+#endif
> };
>
> /* The default peer timeout indicates how long we will wait for a peer response
>@@ -865,7 +868,7 @@ s64 vsock_stream_has_data(struct vsock_sock *vsk)
> }
> EXPORT_SYMBOL_GPL(vsock_stream_has_data);
>
>-static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
>+s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> {
> 	struct sock *sk = sk_vsock(vsk);
>
>@@ -874,6 +877,7 @@ static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> 	else
> 		return vsock_stream_has_data(vsk);
> }
>+EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
>
> s64 vsock_stream_has_space(struct vsock_sock *vsk)
> {
>@@ -1131,6 +1135,13 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
> 	return mask;
> }
>
>+static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
>+{
>+	struct vsock_sock *vsk = vsock_sk(sk);
>+
>+	return vsk->transport->read_skb(vsk, read_actor);
>+}
>+
> static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> 			       size_t len)
> {
>@@ -1241,19 +1252,34 @@ static int vsock_dgram_connect(struct socket *sock,
>
> 	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
> 	sock->state = SS_CONNECTED;
>+	sk->sk_state = TCP_ESTABLISHED;
>
> out:
> 	release_sock(sk);
> 	return err;
> }
>
>-static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>-			       size_t len, int flags)
>+int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>+			size_t len, int flags)
> {
>-	struct vsock_sock *vsk = vsock_sk(sock->sk);
>+#ifdef CONFIG_BPF_SYSCALL
>+	const struct proto *prot;
>+#endif
>+	struct vsock_sock *vsk;
>+	struct sock *sk;
>+
>+	sk = sock->sk;
>+	vsk = vsock_sk(sk);
>+
>+#ifdef CONFIG_BPF_SYSCALL
>+	prot = READ_ONCE(sk->sk_prot);
>+	if (prot != &vsock_proto)
>+		return prot->recvmsg(sk, msg, len, flags, NULL);
>+#endif
>
> 	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
> }
>+EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
>
> static const struct proto_ops vsock_dgram_ops = {
> 	.family = PF_VSOCK,
>@@ -1272,6 +1298,7 @@ static const struct proto_ops vsock_dgram_ops = {
> 	.recvmsg = vsock_dgram_recvmsg,
> 	.mmap = sock_no_mmap,
> 	.sendpage = sock_no_sendpage,
>+	.read_skb = vsock_read_skb,
> };
>
> static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
>@@ -2086,13 +2113,16 @@ static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
> 	return err;
> }
>
>-static int
>+int
> vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> 			  int flags)
> {
> 	struct sock *sk;
> 	struct vsock_sock *vsk;
> 	const struct vsock_transport *transport;
>+#ifdef CONFIG_BPF_SYSCALL
>+	const struct proto *prot;
>+#endif
> 	int err;
>
> 	sk = sock->sk;
>@@ -2139,6 +2169,14 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> 		goto out;
> 	}
>
>+#ifdef CONFIG_BPF_SYSCALL
>+	prot = READ_ONCE(sk->sk_prot);
>+	if (prot != &vsock_proto) {
>+		release_sock(sk);
>+		return prot->recvmsg(sk, msg, len, flags, NULL);
>+	}
>+#endif
>+
> 	if (sk->sk_type == SOCK_STREAM)
> 		err = __vsock_stream_recvmsg(sk, msg, len, flags);
> 	else
>@@ -2148,6 +2186,7 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> 	release_sock(sk);
> 	return err;
> }
>+EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);
>
> static int vsock_set_rcvlowat(struct sock *sk, int val)
> {
>@@ -2188,6 +2227,7 @@ static const struct proto_ops vsock_stream_ops = {
> 	.mmap = sock_no_mmap,
> 	.sendpage = sock_no_sendpage,
> 	.set_rcvlowat = vsock_set_rcvlowat,
>+	.read_skb = vsock_read_skb,
> };
>
> static const struct proto_ops vsock_seqpacket_ops = {
>@@ -2209,6 +2249,7 @@ static const struct proto_ops vsock_seqpacket_ops = {
> 	.recvmsg = vsock_connectible_recvmsg,
> 	.mmap = sock_no_mmap,
> 	.sendpage = sock_no_sendpage,
>+	.read_skb = vsock_read_skb,
> };
>
> static int vsock_create(struct net *net, struct socket *sock,
>@@ -2348,6 +2389,8 @@ static int __init vsock_init(void)
> 		goto err_unregister_proto;
> 	}
>
>+	vsock_bpf_build_proto();
>+
> 	return 0;
>
> err_unregister_proto:
>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>index 28b5a8e8e094..e95df847176b 100644
>--- a/net/vmw_vsock/virtio_transport.c
>+++ b/net/vmw_vsock/virtio_transport.c
>@@ -457,6 +457,8 @@ static struct virtio_transport virtio_transport = {
> 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
> 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
>+
>+		.read_skb = virtio_transport_read_skb,
> 	},
>
> 	.send_pkt = virtio_transport_send_pkt,
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index a1581c77cf84..82bdaf61ee02 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -1388,6 +1388,30 @@ int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *queue)
> }
> EXPORT_SYMBOL_GPL(virtio_transport_purge_skbs);
>
>+int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_actor)
>+{
>+	struct virtio_vsock_sock *vvs = vsk->trans;
>+	struct sock *sk = sk_vsock(vsk);
>+	struct sk_buff *skb;
>+	int off = 0;
>+	int copied;
>+	int err;
>+
>+	spin_lock_bh(&vvs->rx_lock);
>+	/* Use __skb_recv_datagram() for race-free handling of the receive. It
>+	 * works for types other than dgrams. */

Checkpatch:
   WARNING: Block comments use a trailing */ on a separate line

Since we also use this convention in the other comments in this file,
I would follow it.
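
i.e. something like:

        /* Use __skb_recv_datagram() for race-free handling of the receive. It
         * works for types other than dgrams.
         */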

>+	skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err);
>+	spin_unlock_bh(&vvs->rx_lock);
>+
>+	if (!skb)
>+		return err;
>+
>+	copied = recv_actor(sk, skb);
>+	kfree_skb(skb);
>+	return copied;
>+}
>+EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
>+
> MODULE_LICENSE("GPL v2");
> MODULE_AUTHOR("Asias He");
> MODULE_DESCRIPTION("common code for virtio vsock");
>diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
>new file mode 100644
>index 000000000000..ee92a1f42031
>--- /dev/null
>+++ b/net/vmw_vsock/vsock_bpf.c
>@@ -0,0 +1,175 @@
>+// SPDX-License-Identifier: GPL-2.0
>+/* Copyright (c) 2022 Bobby Eshleman <bobby.eshleman@bytedance.com>
>+ *
>+ * Based off of net/unix/unix_bpf.c
>+ */
>+
>+#include <linux/bpf.h>
>+#include <linux/module.h>
>+#include <linux/skmsg.h>
>+#include <linux/socket.h>
>+#include <linux/wait.h>
>+#include <net/af_vsock.h>
>+#include <net/sock.h>
>+
>+#define vsock_sk_has_data(__sk, __psock)				\
>+		({	!skb_queue_empty(&__sk->sk_receive_queue) ||	\
>+			!skb_queue_empty(&__psock->ingress_skb) ||	\
>+			!list_empty(&__psock->ingress_msg);		\
>+		})

Some suggestions from checkpatch that I would fix:

CHECK: Macro argument '__sk' may be better as '(__sk)' to avoid precedence issues
CHECK: Macro argument '__psock' may be better as '(__psock)' to avoid precedence issues
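
e.g. (untested, just adding the parentheses around the arguments):

        #define vsock_sk_has_data(__sk, __psock)                        \
                ({      !skb_queue_empty(&(__sk)->sk_receive_queue) ||  \
                        !skb_queue_empty(&(__psock)->ingress_skb) ||    \
                        !list_empty(&(__psock)->ingress_msg);           \
                })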

>+
>+static struct proto *vsock_prot_saved __read_mostly;
>+static DEFINE_SPINLOCK(vsock_prot_lock);
>+static struct proto vsock_bpf_prot;
>+
>+static bool vsock_has_data(struct sock *sk, struct sk_psock *psock)
>+{
>+	struct vsock_sock *vsk = vsock_sk(sk);
>+	s64 ret;
>+
>+	ret = vsock_connectible_has_data(vsk);
>+	if (ret > 0)
>+		return true;
>+
>+	return vsock_sk_has_data(sk, psock);
>+}
>+
>+static int vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock, long timeo)

IIUC this function now returns 0 when there is no data and 1 when there
is data, so it might be better to change the return type to bool.
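
Something like this (untested sketch, only the return type and the literal
return values change; the callers keep working since they only test the
result for zero/non-zero):

        static bool vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock,
                                        long timeo)
        {
                bool ret;

                DEFINE_WAIT_FUNC(wait, woken_wake_function);

                if (sk->sk_shutdown & RCV_SHUTDOWN)
                        return true;

                if (!timeo)
                        return false;

                add_wait_queue(sk_sleep(sk), &wait);
                sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
                ret = vsock_has_data(sk, psock);
                if (!ret) {
                        wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
                        ret = vsock_has_data(sk, psock);
                }
                sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
                remove_wait_queue(sk_sleep(sk), &wait);
                return ret;
        }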

>+{
>+	int ret = 0;
>+
>+	DEFINE_WAIT_FUNC(wait, woken_wake_function);
>+
>+	if (sk->sk_shutdown & RCV_SHUTDOWN)
>+		return 1;
>+
>+	if (!timeo)
>+		return ret;
>+
>+	add_wait_queue(sk_sleep(sk), &wait);
>+	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
>+	ret = vsock_has_data(sk, psock);
>+	if (!ret) {
>+		wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
>+		ret = vsock_has_data(sk, psock);
>+	}
>+	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
>+	remove_wait_queue(sk_sleep(sk), &wait);
>+	return ret;
>+}
>+
>+static int __vsock_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags)
>+{
>+	struct socket *sock = sk->sk_socket;
>+	int err;
>+
>+	if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
>+		err = vsock_connectible_recvmsg(sock, msg, len, flags);
>+	else if (sk->sk_type == SOCK_DGRAM)
>+		err = vsock_dgram_recvmsg(sock, msg, len, flags);
>+	else
>+		err = -EPROTOTYPE;
>+
>+	return err;
>+}
>+
>+static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
>+			     size_t len, int flags, int *addr_len)
>+{
>+	struct sk_psock *psock;
>+	int copied;
>+
>+	psock = sk_psock_get(sk);
>+	lock_sock(sk);
>+	if (unlikely(!psock)) {
>+		release_sock(sk);

I'm not sure we need the lock to check psock, so can we move this
block before the lock_sock?
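
e.g. (untested, just reordering the start of vsock_bpf_recvmsg()):

        psock = sk_psock_get(sk);
        if (unlikely(!psock))
                return __vsock_recvmsg(sk, msg, len, flags);

        lock_sock(sk);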

>+		return __vsock_recvmsg(sk, msg, len, flags);
>+	}
>+
>+	if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
>+		release_sock(sk);
>+		sk_psock_put(sk, psock);
>+		return __vsock_recvmsg(sk, msg, len, flags);
>+	}
>+
>+msg_bytes_ready:

I saw that this code is also in unix_bpf.c and tcp_bpf.c, so I guess
it's acceptable, but I would have avoided using a label to go back.
I prefer to use goto only for centralized exiting of functions.

One possible refactoring (not tested):

         copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
         while (!copied) {
                 long timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);

                 if (!vsock_msg_wait_data(sk, psock, timeo)) {
                         copied = -EAGAIN;
                         break;
                 }

                 if (sk_psock_queue_empty(psock)) {
                         release_sock(sk);
                         sk_psock_put(sk, psock);
                         return __vsock_recvmsg(sk, msg, len, flags);
                 }

                 copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
         }
         ...

I'm not sure why we need to call release_sock() and sk_psock_put()
before __vsock_recvmsg(). If it is not needed, we can simplify the
previous code further.

The rest LGTM.

Thanks,
Stefano

>+	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
>+	if (!copied) {
>+		long timeo;
>+		int data;
>+
>+		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
>+		data = vsock_msg_wait_data(sk, psock, timeo);
>+		if (data) {
>+			if (!sk_psock_queue_empty(psock))
>+				goto msg_bytes_ready;
>+			release_sock(sk);
>+			sk_psock_put(sk, psock);
>+			return __vsock_recvmsg(sk, msg, len, flags);
>+		}
>+		copied = -EAGAIN;
>+	}
>+	release_sock(sk);
>+	sk_psock_put(sk, psock);
>+
>+	return copied;
>+}
>+
>+/* Copy of original proto with updated sock_map methods */
>+static struct proto vsock_bpf_prot = {
>+	.close = sock_map_close,
>+	.recvmsg = vsock_bpf_recvmsg,
>+	.sock_is_readable = sk_msg_is_readable,
>+	.unhash = sock_map_unhash,
>+};
>+
>+static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
>+{
>+	*prot        = *base;
>+	prot->close  = sock_map_close;
>+	prot->recvmsg = vsock_bpf_recvmsg;
>+	prot->sock_is_readable = sk_msg_is_readable;
>+}
>+
>+static void vsock_bpf_check_needs_rebuild(struct proto *ops)
>+{
>+	/* Paired with the smp_store_release() below. */
>+	if (unlikely(ops != smp_load_acquire(&vsock_prot_saved))) {
>+		spin_lock_bh(&vsock_prot_lock);
>+		if (likely(ops != vsock_prot_saved)) {
>+			vsock_bpf_rebuild_protos(&vsock_bpf_prot, ops);
>+			/* Make sure proto function pointers are updated before publishing the
>+			 * pointer to the struct.
>+			 */
>+			smp_store_release(&vsock_prot_saved, ops);
>+		}
>+		spin_unlock_bh(&vsock_prot_lock);
>+	}
>+}
>+
>+int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
>+{
>+	struct vsock_sock *vsk;
>+
>+	if (restore) {
>+		sk->sk_write_space = psock->saved_write_space;
>+		sock_replace_proto(sk, psock->sk_proto);
>+		return 0;
>+	}
>+
>+	vsk = vsock_sk(sk);
>+	if (!vsk->transport)
>+		return -ENODEV;
>+
>+	if (!vsk->transport->read_skb)
>+		return -EOPNOTSUPP;
>+
>+	vsock_bpf_check_needs_rebuild(psock->sk_proto);
>+	sock_replace_proto(sk, &vsock_bpf_prot);
>+	return 0;
>+}
>+
>+void __init vsock_bpf_build_proto(void)
>+{
>+	vsock_bpf_rebuild_protos(&vsock_bpf_prot, &vsock_proto);
>+}
>diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
>index 671e03240fc5..40753b661c13 100644
>--- a/net/vmw_vsock/vsock_loopback.c
>+++ b/net/vmw_vsock/vsock_loopback.c
>@@ -94,6 +94,8 @@ static struct virtio_transport loopback_transport = {
> 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
> 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
>+
>+		.read_skb = virtio_transport_read_skb,
> 	},
>
> 	.send_pkt = vsock_loopback_send_pkt,
>
>-- 
>2.35.1
>


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh
  2023-01-31  4:35 ` [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh Bobby Eshleman
@ 2023-02-16 10:12     ` Stefano Garzarella
  0 siblings, 0 replies; 16+ messages in thread
From: Stefano Garzarella @ 2023-02-16 10:12 UTC (permalink / raw)
  To: Bobby Eshleman
  Cc: Bobby Eshleman, hdanton, cong.wang, kvm, Michael S. Tsirkin,
	Alexei Starovoitov, virtualization, Song Liu, Eric Dumazet,
	Stanislav Fomichev, linux-kselftest, Shuah Khan, Mykola Lysenko,
	jakub, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
	Yonghong Song, Paolo Abeni, KP Singh, Stefan Hajnoczi,
	Jakub Kicinski, Hao Luo, netdev, linux-kernel, Jiri Olsa, bpf,
	Martin KaFai Lau, David S. Miller

On Mon, Jan 30, 2023 at 08:35:13PM -0800, Bobby Eshleman wrote:
>Add vsock loopback to the test kernel.
>
>This allows sockmap for vsock to be tested.
>
>Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
>---
> tools/testing/selftests/bpf/config.aarch64 | 2 ++
> tools/testing/selftests/bpf/config.s390x   | 3 +++
> tools/testing/selftests/bpf/config.x86_64  | 3 +++
> 3 files changed, 8 insertions(+)

Acked-by: Stefano Garzarella <sgarzare@redhat.com>

>
>diff --git a/tools/testing/selftests/bpf/config.aarch64 b/tools/testing/selftests/bpf/config.aarch64
>index 1f0437644186..253821494884 100644
>--- a/tools/testing/selftests/bpf/config.aarch64
>+++ b/tools/testing/selftests/bpf/config.aarch64
>@@ -176,6 +176,8 @@ CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
> CONFIG_VIRTIO_MMIO=y
> CONFIG_VIRTIO_NET=y
> CONFIG_VIRTIO_PCI=y
>+CONFIG_VIRTIO_VSOCKETS_COMMON=y
> CONFIG_VLAN_8021Q=y
> CONFIG_VSOCKETS=y
>+CONFIG_VSOCKETS_LOOPBACK=y
> CONFIG_XFRM_USER=y
>diff --git a/tools/testing/selftests/bpf/config.s390x b/tools/testing/selftests/bpf/config.s390x
>index d49f6170e7bd..2ba92167be35 100644
>--- a/tools/testing/selftests/bpf/config.s390x
>+++ b/tools/testing/selftests/bpf/config.s390x
>@@ -140,5 +140,8 @@ CONFIG_VIRTIO_BALLOON=y
> CONFIG_VIRTIO_BLK=y
> CONFIG_VIRTIO_NET=y
> CONFIG_VIRTIO_PCI=y
>+CONFIG_VIRTIO_VSOCKETS_COMMON=y
> CONFIG_VLAN_8021Q=y
>+CONFIG_VSOCKETS=y
>+CONFIG_VSOCKETS_LOOPBACK=y
> CONFIG_XFRM_USER=y
>diff --git a/tools/testing/selftests/bpf/config.x86_64 b/tools/testing/selftests/bpf/config.x86_64
>index dd97d61d325c..b650b2e617b8 100644
>--- a/tools/testing/selftests/bpf/config.x86_64
>+++ b/tools/testing/selftests/bpf/config.x86_64
>@@ -234,7 +234,10 @@ CONFIG_VIRTIO_BLK=y
> CONFIG_VIRTIO_CONSOLE=y
> CONFIG_VIRTIO_NET=y
> CONFIG_VIRTIO_PCI=y
>+CONFIG_VIRTIO_VSOCKETS_COMMON=y
> CONFIG_VLAN_8021Q=y
>+CONFIG_VSOCKETS=y
>+CONFIG_VSOCKETS_LOOPBACK=y
> CONFIG_X86_ACPI_CPUFREQ=y
> CONFIG_X86_CPUID=y
> CONFIG_X86_MSR=y
>
>-- 
>2.35.1
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 3/3] selftests/bpf: Add a test case for vsock sockmap
  2023-01-31  4:35 ` [PATCH RFC net-next v2 3/3] selftests/bpf: Add a test case for vsock sockmap Bobby Eshleman
@ 2023-02-16 10:16     ` Stefano Garzarella
  0 siblings, 0 replies; 16+ messages in thread
From: Stefano Garzarella @ 2023-02-16 10:16 UTC (permalink / raw)
  To: Bobby Eshleman
  Cc: Bobby Eshleman, hdanton, cong.wang, kvm, Michael S. Tsirkin,
	Alexei Starovoitov, virtualization, Song Liu, Eric Dumazet,
	Stanislav Fomichev, linux-kselftest, Shuah Khan, Mykola Lysenko,
	jakub, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
	Yonghong Song, Paolo Abeni, KP Singh, Stefan Hajnoczi,
	Jakub Kicinski, Hao Luo, netdev, linux-kernel, Jiri Olsa, bpf,
	Martin KaFai Lau, David S. Miller

On Mon, Jan 30, 2023 at 08:35:14PM -0800, Bobby Eshleman wrote:
>Add a test case testing the redirection from connectible AF_VSOCK
>sockets to connectible AF_UNIX sockets.
>
>Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
>---
> .../selftests/bpf/prog_tests/sockmap_listen.c      | 163 +++++++++++++++++++++
> 1 file changed, 163 insertions(+)

For the vsock part:

Acked-by: Stefano Garzarella <sgarzare@redhat.com>

>
>diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
>index 2cf0c7a3fe23..8b5a2e09c9ed 100644
>--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
>+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
>@@ -18,6 +18,7 @@
> #include <string.h>
> #include <sys/select.h>
> #include <unistd.h>
>+#include <linux/vm_sockets.h>
>
> #include <bpf/bpf.h>
> #include <bpf/libbpf.h>
>@@ -249,6 +250,16 @@ static void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len)
> 	*len = sizeof(*addr6);
> }
>
>+static void init_addr_loopback_vsock(struct sockaddr_storage *ss, socklen_t *len)
>+{
>+	struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss));
>+
>+	addr->svm_family = AF_VSOCK;
>+	addr->svm_port = VMADDR_PORT_ANY;
>+	addr->svm_cid = VMADDR_CID_LOCAL;
>+	*len = sizeof(*addr);
>+}
>+
> static void init_addr_loopback(int family, struct sockaddr_storage *ss,
> 			       socklen_t *len)
> {
>@@ -259,6 +270,9 @@ static void init_addr_loopback(int family, struct sockaddr_storage *ss,
> 	case AF_INET6:
> 		init_addr_loopback6(ss, len);
> 		return;
>+	case AF_VSOCK:
>+		init_addr_loopback_vsock(ss, len);
>+		return;
> 	default:
> 		FAIL("unsupported address family %d", family);
> 	}
>@@ -1434,6 +1448,8 @@ static const char *family_str(sa_family_t family)
> 		return "IPv6";
> 	case AF_UNIX:
> 		return "Unix";
>+	case AF_VSOCK:
>+		return "VSOCK";
> 	default:
> 		return "unknown";
> 	}
>@@ -1644,6 +1660,151 @@ static void test_unix_redir(struct test_sockmap_listen *skel, struct bpf_map *ma
> 	unix_skb_redir_to_connected(skel, map, sotype);
> }
>
>+/* Returns two connected loopback vsock sockets */
>+static int vsock_socketpair_connectible(int sotype, int *v0, int *v1)
>+{
>+	struct sockaddr_storage addr;
>+	socklen_t len = sizeof(addr);
>+	int s, p, c;
>+
>+	s = socket_loopback(AF_VSOCK, sotype);
>+	if (s < 0)
>+		return -1;
>+
>+	c = xsocket(AF_VSOCK, sotype | SOCK_NONBLOCK, 0);
>+	if (c == -1)
>+		goto close_srv;
>+
>+	if (getsockname(s, sockaddr(&addr), &len) < 0)
>+		goto close_cli;
>+
>+	if (connect(c, sockaddr(&addr), len) < 0 && errno != EINPROGRESS) {
>+		FAIL_ERRNO("connect");
>+		goto close_cli;
>+	}
>+
>+	len = sizeof(addr);
>+	p = accept_timeout(s, sockaddr(&addr), &len, IO_TIMEOUT_SEC);
>+	if (p < 0)
>+		goto close_cli;
>+
>+	*v0 = p;
>+	*v1 = c;
>+
>+	return 0;
>+
>+close_cli:
>+	close(c);
>+close_srv:
>+	close(s);
>+
>+	return -1;
>+}
>+
>+static void vsock_unix_redir_connectible(int sock_mapfd, int verd_mapfd,
>+					 enum redir_mode mode, int sotype)
>+{
>+	const char *log_prefix = redir_mode_str(mode);
>+	char a = 'a', b = 'b';
>+	int u0, u1, v0, v1;
>+	int sfd[2];
>+	unsigned int pass;
>+	int err, n;
>+	u32 key;
>+
>+	zero_verdict_count(verd_mapfd);
>+
>+	if (socketpair(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0, sfd))
>+		return;
>+
>+	u0 = sfd[0];
>+	u1 = sfd[1];
>+
>+	err = vsock_socketpair_connectible(sotype, &v0, &v1);
>+	if (err) {
>+		FAIL("vsock_socketpair_connectible() failed");
>+		goto close_uds;
>+	}
>+
>+	err = add_to_sockmap(sock_mapfd, u0, v0);
>+	if (err) {
>+		FAIL("add_to_sockmap failed");
>+		goto close_vsock;
>+	}
>+
>+	n = write(v1, &a, sizeof(a));
>+	if (n < 0)
>+		FAIL_ERRNO("%s: write", log_prefix);
>+	if (n == 0)
>+		FAIL("%s: incomplete write", log_prefix);
>+	if (n < 1)
>+		goto out;
>+
>+	n = recv(mode == REDIR_INGRESS ? u0 : u1, &b, sizeof(b), MSG_DONTWAIT);
>+	if (n < 0)
>+		FAIL("%s: recv() err, errno=%d", log_prefix, errno);
>+	if (n == 0)
>+		FAIL("%s: incomplete recv", log_prefix);
>+	if (b != a)
>+		FAIL("%s: vsock socket map failed, %c != %c", log_prefix, a, b);
>+
>+	key = SK_PASS;
>+	err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
>+	if (err)
>+		goto out;
>+	if (pass != 1)
>+		FAIL("%s: want pass count 1, have %d", log_prefix, pass);
>+out:
>+	key = 0;
>+	bpf_map_delete_elem(sock_mapfd, &key);
>+	key = 1;
>+	bpf_map_delete_elem(sock_mapfd, &key);
>+
>+close_vsock:
>+	close(v0);
>+	close(v1);
>+
>+close_uds:
>+	close(u0);
>+	close(u1);
>+}
>+
>+static void vsock_unix_skb_redir_connectible(struct test_sockmap_listen *skel,
>+					     struct bpf_map *inner_map,
>+					     int sotype)
>+{
>+	int verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
>+	int verdict_map = bpf_map__fd(skel->maps.verdict_map);
>+	int sock_map = bpf_map__fd(inner_map);
>+	int err;
>+
>+	err = xbpf_prog_attach(verdict, sock_map, BPF_SK_SKB_VERDICT, 0);
>+	if (err)
>+		return;
>+
>+	skel->bss->test_ingress = false;
>+	vsock_unix_redir_connectible(sock_map, verdict_map, REDIR_EGRESS, sotype);
>+	skel->bss->test_ingress = true;
>+	vsock_unix_redir_connectible(sock_map, verdict_map, REDIR_INGRESS, sotype);
>+
>+	xbpf_prog_detach2(verdict, sock_map, BPF_SK_SKB_VERDICT);
>+}
>+
>+static void test_vsock_redir(struct test_sockmap_listen *skel, struct bpf_map *map)
>+{
>+	const char *family_name, *map_name;
>+	char s[MAX_TEST_NAME];
>+
>+	family_name = family_str(AF_VSOCK);
>+	map_name = map_type_str(map);
>+	snprintf(s, sizeof(s), "%s %s %s", map_name, family_name, __func__);
>+	if (!test__start_subtest(s))
>+		return;
>+
>+	vsock_unix_skb_redir_connectible(skel, map, SOCK_STREAM);
>+	vsock_unix_skb_redir_connectible(skel, map, SOCK_SEQPACKET);
>+}
>+
> static void test_reuseport(struct test_sockmap_listen *skel,
> 			   struct bpf_map *map, int family, int sotype)
> {
>@@ -2015,12 +2176,14 @@ void serial_test_sockmap_listen(void)
> 	run_tests(skel, skel->maps.sock_map, AF_INET6);
> 	test_unix_redir(skel, skel->maps.sock_map, SOCK_DGRAM);
> 	test_unix_redir(skel, skel->maps.sock_map, SOCK_STREAM);
>+	test_vsock_redir(skel, skel->maps.sock_map);
>
> 	skel->bss->test_sockmap = false;
> 	run_tests(skel, skel->maps.sock_hash, AF_INET);
> 	run_tests(skel, skel->maps.sock_hash, AF_INET6);
> 	test_unix_redir(skel, skel->maps.sock_hash, SOCK_DGRAM);
> 	test_unix_redir(skel, skel->maps.sock_hash, SOCK_STREAM);
>+	test_vsock_redir(skel, skel->maps.sock_hash);
>
> 	test_sockmap_listen__destroy(skel);
> }
>
>-- 
>2.35.1
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 0/3] vsock: add support for sockmap
  2023-01-31  4:35 [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Bobby Eshleman
@ 2023-02-16 10:20   ` Stefano Garzarella
  2023-01-31  4:35 ` [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh Bobby Eshleman
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Stefano Garzarella @ 2023-02-16 10:20 UTC (permalink / raw)
  To: Bobby Eshleman
  Cc: Bobby Eshleman, hdanton, cong.wang, kvm, Michael S. Tsirkin,
	Alexei Starovoitov, virtualization, Song Liu, Eric Dumazet,
	Stanislav Fomichev, linux-kselftest, Shuah Khan, Mykola Lysenko,
	jakub, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
	Yonghong Song, Paolo Abeni, KP Singh, Stefan Hajnoczi,
	Jakub Kicinski, Hao Luo, netdev, linux-kernel, Jiri Olsa, bpf,
	Martin KaFai Lau, David S. Miller

Hi Bobby,
sorry for my late reply, but I have been offline these days.
I came back a few days ago and had to catch up on the backlog :-)

On Mon, Jan 30, 2023 at 08:35:11PM -0800, Bobby Eshleman wrote:
>Add support for sockmap to vsock.
>
>We're testing usage of vsock as a way to redirect guest-local UDS requests to
>the host and this patch series greatly improves the performance of such a
>setup.
>
>Compared to copying packets via userspace, this improves throughput by 121% in
>basic testing.
>
>Tested as follows.
>
>Setup: guest unix dgram sender -> guest vsock redirector -> host vsock server
>Threads: 1
>Payload: 64k
>No sockmap:
>- 76.3 MB/s
>- The guest vsock redirector was
>  "socat VSOCK-CONNECT:2:1234 UNIX-RECV:/path/to/sock"
>Using sockmap (this patch):
>- 168.8 MB/s (+121%)
>- The guest redirector was a simple sockmap echo server,
>  redirecting unix ingress to vsock 2:1234 egress.
>- Same sender and server programs
>
>*Note: these numbers are from RFC v1
>
>Only the virtio transport has been tested. The loopback transport was used in
>writing bpf/selftests, but not thoroughly tested otherwise.
>
>This series requires the skb patch.
>
>Changes in v2:
>- vsock/bpf: rename vsock_dgram_* -> vsock_*
>- vsock/bpf: change sk_psock_{get,put} and {lock,release}_sock() order to
>	     minimize slock hold time
>- vsock/bpf: use "new style" wait
>- vsock/bpf: fix bug in wait log
>- vsock/bpf: add check that recvmsg sk_type is one dgram, seqpacket, or stream.
>	     Return error if not one of the three.
>- virtio/vsock: comment __skb_recv_datagram() usage
>- virtio/vsock: do not init copied in read_skb()
>- vsock/bpf: add ifdef guard around struct proto in dgram_recvmsg()
>- selftests/bpf: add vsock loopback config for aarch64
>- selftests/bpf: add vsock loopback config for s390x
>- selftests/bpf: remove vsock device from vmtest.sh qemu machine
>- selftests/bpf: remove CONFIG_VIRTIO_VSOCKETS=y from config.x86_64
>- vsock/bpf: move transport-related (e.g., if (!vsk->transport)) checks out of
>	     fast path

The series looks in good shape.
I left some small comments on the first patch, but I think the next
version could be sent without the RFC tag, so we can receive some
feedback from the net/bpf maintainers.

Great job!

Thanks,
Stefano

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH RFC net-next v2 1/3] vsock: support sockmap
  2023-02-16 10:11     ` Stefano Garzarella
  (?)
@ 2023-02-16 17:15     ` Bobby Eshleman
  -1 siblings, 0 replies; 16+ messages in thread
From: Bobby Eshleman @ 2023-02-16 17:15 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Bobby Eshleman, hdanton, cong.wang, kvm, Michael S. Tsirkin,
	Alexei Starovoitov, virtualization, Song Liu, Eric Dumazet,
	Stanislav Fomichev, linux-kselftest, Shuah Khan, Mykola Lysenko,
	jakub, Daniel Borkmann, John Fastabend, Andrii Nakryiko,
	Yonghong Song, Paolo Abeni, KP Singh, Stefan Hajnoczi,
	Jakub Kicinski, Hao Luo, netdev, linux-kernel, Jiri Olsa, bpf,
	Martin KaFai Lau, David S. Miller

On Thu, Feb 16, 2023 at 11:11:10AM +0100, Stefano Garzarella wrote:
> On Mon, Jan 30, 2023 at 08:35:12PM -0800, Bobby Eshleman wrote:
> > This patch adds sockmap support for vsock sockets. It is intended to be
> > usable by all transports, but only the virtio transport is implemented.
> > 
> > Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
> > ---
> > drivers/vhost/vsock.c                   |   1 +
> > include/linux/virtio_vsock.h            |   1 +
> > include/net/af_vsock.h                  |  17 ++++
> > net/vmw_vsock/Makefile                  |   1 +
> > net/vmw_vsock/af_vsock.c                |  55 ++++++++--
> > net/vmw_vsock/virtio_transport.c        |   2 +
> > net/vmw_vsock/virtio_transport_common.c |  24 +++++
> > net/vmw_vsock/vsock_bpf.c               | 175 ++++++++++++++++++++++++++++++++
> > net/vmw_vsock/vsock_loopback.c          |   2 +
> > 9 files changed, 272 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> > index 1f3b89c885cc..3c6dc036b904 100644
> > --- a/drivers/vhost/vsock.c
> > +++ b/drivers/vhost/vsock.c
> > @@ -439,6 +439,7 @@ static struct virtio_transport vhost_transport = {
> > 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> > 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
> > 
> > +		.read_skb = virtio_transport_read_skb,
> > 	},
> > 
> > 	.send_pkt = vhost_transport_send_pkt,
> > diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> > index 3f9c16611306..c58453699ee9 100644
> > --- a/include/linux/virtio_vsock.h
> > +++ b/include/linux/virtio_vsock.h
> > @@ -245,4 +245,5 @@ u32 virtio_transport_get_credit(struct virtio_vsock_sock *vvs, u32 wanted);
> > void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
> > void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
> > int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
> > +int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
> > #endif /* _LINUX_VIRTIO_VSOCK_H */
> > diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
> > index 568a87c5e0d0..a73f5fbd296a 100644
> > --- a/include/net/af_vsock.h
> > +++ b/include/net/af_vsock.h
> > @@ -75,6 +75,7 @@ struct vsock_sock {
> > 	void *trans;
> > };
> > 
> > +s64 vsock_connectible_has_data(struct vsock_sock *vsk);
> > s64 vsock_stream_has_data(struct vsock_sock *vsk);
> > s64 vsock_stream_has_space(struct vsock_sock *vsk);
> > struct sock *vsock_create_connected(struct sock *parent);
> > @@ -173,6 +174,9 @@ struct vsock_transport {
> > 
> > 	/* Addressing. */
> > 	u32 (*get_local_cid)(void);
> > +
> > +	/* Read a single skb */
> > +	int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
> > };
> > 
> > /**** CORE ****/
> > @@ -225,5 +229,18 @@ int vsock_init_tap(void);
> > int vsock_add_tap(struct vsock_tap *vt);
> > int vsock_remove_tap(struct vsock_tap *vt);
> > void vsock_deliver_tap(struct sk_buff *build_skb(void *opaque), void *opaque);
> > +int vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> > +			      int flags);
> > +int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
> > +			       size_t len, int flags);
> 
> checkpatch complains: "CHECK: Alignment should match open parenthesis"
> 
> > +
> > +#ifdef CONFIG_BPF_SYSCALL
> > +extern struct proto vsock_proto;
> > +int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
> > +void __init vsock_bpf_build_proto(void);
> > +#else
> > +static inline void __init vsock_bpf_build_proto(void)
> > +{}
> > +#endif
> > 
> > #endif /* __AF_VSOCK_H__ */
> > diff --git a/net/vmw_vsock/Makefile b/net/vmw_vsock/Makefile
> > index 6a943ec95c4a..5da74c4a9f1d 100644
> > --- a/net/vmw_vsock/Makefile
> > +++ b/net/vmw_vsock/Makefile
> > @@ -8,6 +8,7 @@ obj-$(CONFIG_HYPERV_VSOCKETS) += hv_sock.o
> > obj-$(CONFIG_VSOCKETS_LOOPBACK) += vsock_loopback.o
> > 
> > vsock-y += af_vsock.o af_vsock_tap.o vsock_addr.o
> > +vsock-$(CONFIG_BPF_SYSCALL) += vsock_bpf.o
> > 
> > vsock_diag-y += diag.o
> > 
> > diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> > index 19aea7cba26e..f2cc04fb8b13 100644
> > --- a/net/vmw_vsock/af_vsock.c
> > +++ b/net/vmw_vsock/af_vsock.c
> > @@ -116,10 +116,13 @@ static void vsock_sk_destruct(struct sock *sk);
> > static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
> > 
> > /* Protocol family. */
> > -static struct proto vsock_proto = {
> > +struct proto vsock_proto = {
> > 	.name = "AF_VSOCK",
> > 	.owner = THIS_MODULE,
> > 	.obj_size = sizeof(struct vsock_sock),
> > +#ifdef CONFIG_BPF_SYSCALL
> > +	.psock_update_sk_prot = vsock_bpf_update_proto,
> > +#endif
> > };
> > 
> > /* The default peer timeout indicates how long we will wait for a peer response
> > @@ -865,7 +868,7 @@ s64 vsock_stream_has_data(struct vsock_sock *vsk)
> > }
> > EXPORT_SYMBOL_GPL(vsock_stream_has_data);
> > 
> > -static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> > +s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> > {
> > 	struct sock *sk = sk_vsock(vsk);
> > 
> > @@ -874,6 +877,7 @@ static s64 vsock_connectible_has_data(struct vsock_sock *vsk)
> > 	else
> > 		return vsock_stream_has_data(vsk);
> > }
> > +EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
> > 
> > s64 vsock_stream_has_space(struct vsock_sock *vsk)
> > {
> > @@ -1131,6 +1135,13 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
> > 	return mask;
> > }
> > 
> > +static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
> > +{
> > +	struct vsock_sock *vsk = vsock_sk(sk);
> > +
> > +	return vsk->transport->read_skb(vsk, read_actor);
> > +}
> > +
> > static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
> > 			       size_t len)
> > {
> > @@ -1241,19 +1252,34 @@ static int vsock_dgram_connect(struct socket *sock,
> > 
> > 	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
> > 	sock->state = SS_CONNECTED;
> > +	sk->sk_state = TCP_ESTABLISHED;
> > 
> > out:
> > 	release_sock(sk);
> > 	return err;
> > }
> > 
> > -static int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
> > -			       size_t len, int flags)
> > +int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
> > +			size_t len, int flags)
> > {
> > -	struct vsock_sock *vsk = vsock_sk(sock->sk);
> > +#ifdef CONFIG_BPF_SYSCALL
> > +	const struct proto *prot;
> > +#endif
> > +	struct vsock_sock *vsk;
> > +	struct sock *sk;
> > +
> > +	sk = sock->sk;
> > +	vsk = vsock_sk(sk);
> > +
> > +#ifdef CONFIG_BPF_SYSCALL
> > +	prot = READ_ONCE(sk->sk_prot);
> > +	if (prot != &vsock_proto)
> > +		return prot->recvmsg(sk, msg, len, flags, NULL);
> > +#endif
> > 
> > 	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
> > }
> > +EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
> > 
> > static const struct proto_ops vsock_dgram_ops = {
> > 	.family = PF_VSOCK,
> > @@ -1272,6 +1298,7 @@ static const struct proto_ops vsock_dgram_ops = {
> > 	.recvmsg = vsock_dgram_recvmsg,
> > 	.mmap = sock_no_mmap,
> > 	.sendpage = sock_no_sendpage,
> > +	.read_skb = vsock_read_skb,
> > };
> > 
> > static int vsock_transport_cancel_pkt(struct vsock_sock *vsk)
> > @@ -2086,13 +2113,16 @@ static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
> > 	return err;
> > }
> > 
> > -static int
> > +int
> > vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> > 			  int flags)
> > {
> > 	struct sock *sk;
> > 	struct vsock_sock *vsk;
> > 	const struct vsock_transport *transport;
> > +#ifdef CONFIG_BPF_SYSCALL
> > +	const struct proto *prot;
> > +#endif
> > 	int err;
> > 
> > 	sk = sock->sk;
> > @@ -2139,6 +2169,14 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> > 		goto out;
> > 	}
> > 
> > +#ifdef CONFIG_BPF_SYSCALL
> > +	prot = READ_ONCE(sk->sk_prot);
> > +	if (prot != &vsock_proto) {
> > +		release_sock(sk);
> > +		return prot->recvmsg(sk, msg, len, flags, NULL);
> > +	}
> > +#endif
> > +
> > 	if (sk->sk_type == SOCK_STREAM)
> > 		err = __vsock_stream_recvmsg(sk, msg, len, flags);
> > 	else
> > @@ -2148,6 +2186,7 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
> > 	release_sock(sk);
> > 	return err;
> > }
> > +EXPORT_SYMBOL_GPL(vsock_connectible_recvmsg);
> > 
> > static int vsock_set_rcvlowat(struct sock *sk, int val)
> > {
> > @@ -2188,6 +2227,7 @@ static const struct proto_ops vsock_stream_ops = {
> > 	.mmap = sock_no_mmap,
> > 	.sendpage = sock_no_sendpage,
> > 	.set_rcvlowat = vsock_set_rcvlowat,
> > +	.read_skb = vsock_read_skb,
> > };
> > 
> > static const struct proto_ops vsock_seqpacket_ops = {
> > @@ -2209,6 +2249,7 @@ static const struct proto_ops vsock_seqpacket_ops = {
> > 	.recvmsg = vsock_connectible_recvmsg,
> > 	.mmap = sock_no_mmap,
> > 	.sendpage = sock_no_sendpage,
> > +	.read_skb = vsock_read_skb,
> > };
> > 
> > static int vsock_create(struct net *net, struct socket *sock,
> > @@ -2348,6 +2389,8 @@ static int __init vsock_init(void)
> > 		goto err_unregister_proto;
> > 	}
> > 
> > +	vsock_bpf_build_proto();
> > +
> > 	return 0;
> > 
> > err_unregister_proto:
> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > index 28b5a8e8e094..e95df847176b 100644
> > --- a/net/vmw_vsock/virtio_transport.c
> > +++ b/net/vmw_vsock/virtio_transport.c
> > @@ -457,6 +457,8 @@ static struct virtio_transport virtio_transport = {
> > 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
> > 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> > 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
> > +
> > +		.read_skb = virtio_transport_read_skb,
> > 	},
> > 
> > 	.send_pkt = virtio_transport_send_pkt,
> > diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> > index a1581c77cf84..82bdaf61ee02 100644
> > --- a/net/vmw_vsock/virtio_transport_common.c
> > +++ b/net/vmw_vsock/virtio_transport_common.c
> > @@ -1388,6 +1388,30 @@ int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *queue)
> > }
> > EXPORT_SYMBOL_GPL(virtio_transport_purge_skbs);
> > 
> > +int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_actor)
> > +{
> > +	struct virtio_vsock_sock *vvs = vsk->trans;
> > +	struct sock *sk = sk_vsock(vsk);
> > +	struct sk_buff *skb;
> > +	int off = 0;
> > +	int copied;
> > +	int err;
> > +
> > +	spin_lock_bh(&vvs->rx_lock);
> > +	/* Use __skb_recv_datagram() for race-free handling of the receive. It
> > +	 * works for types other than dgrams. */
> 
> Checkpatch:
>   WARNING: Block comments use a trailing */ on a separate line
> 
> Since we also use this convention in the other comments in this file,
> I would follow it.
> 
> > +	skb = __skb_recv_datagram(sk, &vvs->rx_queue, MSG_DONTWAIT, &off, &err);
> > +	spin_unlock_bh(&vvs->rx_lock);
> > +
> > +	if (!skb)
> > +		return err;
> > +
> > +	copied = recv_actor(sk, skb);
> > +	kfree_skb(skb);
> > +	return copied;
> > +}
> > +EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
> > +
> > MODULE_LICENSE("GPL v2");
> > MODULE_AUTHOR("Asias He");
> > MODULE_DESCRIPTION("common code for virtio vsock");
> > diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
> > new file mode 100644
> > index 000000000000..ee92a1f42031
> > --- /dev/null
> > +++ b/net/vmw_vsock/vsock_bpf.c
> > @@ -0,0 +1,175 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Copyright (c) 2022 Bobby Eshleman <bobby.eshleman@bytedance.com>
> > + *
> > + * Based off of net/unix/unix_bpf.c
> > + */
> > +
> > +#include <linux/bpf.h>
> > +#include <linux/module.h>
> > +#include <linux/skmsg.h>
> > +#include <linux/socket.h>
> > +#include <linux/wait.h>
> > +#include <net/af_vsock.h>
> > +#include <net/sock.h>
> > +
> > +#define vsock_sk_has_data(__sk, __psock)				\
> > +		({	!skb_queue_empty(&__sk->sk_receive_queue) ||	\
> > +			!skb_queue_empty(&__psock->ingress_skb) ||	\
> > +			!list_empty(&__psock->ingress_msg);		\
> > +		})
> 
> Some suggestions from checkpatch that I would fix:
> 
> CHECK: Macro argument '__sk' may be better as '(__sk)' to avoid precedence issues
> CHECK: Macro argument '__psock' may be better as '(__psock)' to avoid precedence issues
> 
> > +
> > +static struct proto *vsock_prot_saved __read_mostly;
> > +static DEFINE_SPINLOCK(vsock_prot_lock);
> > +static struct proto vsock_bpf_prot;
> > +
> > +static bool vsock_has_data(struct sock *sk, struct sk_psock *psock)
> > +{
> > +	struct vsock_sock *vsk = vsock_sk(sk);
> > +	s64 ret;
> > +
> > +	ret = vsock_connectible_has_data(vsk);
> > +	if (ret > 0)
> > +		return true;
> > +
> > +	return vsock_sk_has_data(sk, psock);
> > +}
> > +
> > +static int vsock_msg_wait_data(struct sock *sk, struct sk_psock *psock, long timeo)
> 
> IIUC this function now returns 0 when there is no data and 1 when there
> is data, so it might be better to change the return type to bool.
> 
> > +{
> > +	int ret = 0;
> > +
> > +	DEFINE_WAIT_FUNC(wait, woken_wake_function);
> > +
> > +	if (sk->sk_shutdown & RCV_SHUTDOWN)
> > +		return 1;
> > +
> > +	if (!timeo)
> > +		return ret;
> > +
> > +	add_wait_queue(sk_sleep(sk), &wait);
> > +	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
> > +	ret = vsock_has_data(sk, psock);
> > +	if (!ret) {
> > +		wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
> > +		ret = vsock_has_data(sk, psock);
> > +	}
> > +	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
> > +	remove_wait_queue(sk_sleep(sk), &wait);
> > +	return ret;
> > +}
> > +
> > +static int __vsock_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags)
> > +{
> > +	struct socket *sock = sk->sk_socket;
> > +	int err;
> > +
> > +	if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
> > +		err = vsock_connectible_recvmsg(sock, msg, len, flags);
> > +	else if (sk->sk_type == SOCK_DGRAM)
> > +		err = vsock_dgram_recvmsg(sock, msg, len, flags);
> > +	else
> > +		err = -EPROTOTYPE;
> > +
> > +	return err;
> > +}
> > +
> > +static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
> > +			     size_t len, int flags, int *addr_len)
> > +{
> > +	struct sk_psock *psock;
> > +	int copied;
> > +
> > +	psock = sk_psock_get(sk);
> > +	lock_sock(sk);
> > +	if (unlikely(!psock)) {
> > +		release_sock(sk);
> 
> I'm not sure we need the lock to check psock, so can we move this
> block before lock_sock()?
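> 
> Something like (untested):
> 
> 	psock = sk_psock_get(sk);
> 	if (unlikely(!psock))
> 		return __vsock_recvmsg(sk, msg, len, flags);
> 
> 	lock_sock(sk);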
> 
> > +		return __vsock_recvmsg(sk, msg, len, flags);
> > +	}
> > +
> > +	if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
> > +		release_sock(sk);
> > +		sk_psock_put(sk, psock);
> > +		return __vsock_recvmsg(sk, msg, len, flags);
> > +	}
> > +
> > +msg_bytes_ready:
> 
> I saw that this code is also in unix_bpf.c and tcp_bpf.c, so I guess
> it's acceptable, but I would have avoided using a label to jump back.
> I prefer to use goto only for centralized exiting of functions.
> 
> One possible refactoring (not tested):
> 
>         copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
>         while (!copied) {
>                 long timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
> 
>                 if (!vsock_msg_wait_data(sk, psock, timeo)) {
>                         copied = -EAGAIN;
>                         break;
>                 }
> 
>                 if (sk_psock_queue_empty(psock)) {
>                         release_sock(sk);
>                         sk_psock_put(sk, psock);
>                         return __vsock_recvmsg(sk, msg, len, flags);
>                 }
> 
>                 copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
>         }
>         ...
> 
> I'm not sure why we need to call release_sock() and sk_psock_put()
> before __vsock_recvmsg(). If it is not needed, we can simplify the
> previous code further.
> 
> The rest LGTM.
> 
> Thanks,
> Stefano
> 

Thanks Stefano, I'll incorporate your feedback in the next rev and
remove the RFC header as suggested.

Thanks again, hope you had a good time off work!

Best,
Bobby

> > +	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
> > +	if (!copied) {
> > +		long timeo;
> > +		int data;
> > +
> > +		timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
> > +		data = vsock_msg_wait_data(sk, psock, timeo);
> > +		if (data) {
> > +			if (!sk_psock_queue_empty(psock))
> > +				goto msg_bytes_ready;
> > +			release_sock(sk);
> > +			sk_psock_put(sk, psock);
> > +			return __vsock_recvmsg(sk, msg, len, flags);
> > +		}
> > +		copied = -EAGAIN;
> > +	}
> > +	release_sock(sk);
> > +	sk_psock_put(sk, psock);
> > +
> > +	return copied;
> > +}
> > +
> > +/* Copy of original proto with updated sock_map methods */
> > +static struct proto vsock_bpf_prot = {
> > +	.close = sock_map_close,
> > +	.recvmsg = vsock_bpf_recvmsg,
> > +	.sock_is_readable = sk_msg_is_readable,
> > +	.unhash = sock_map_unhash,
> > +};
> > +
> > +static void vsock_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
> > +{
> > +	*prot        = *base;
> > +	prot->close  = sock_map_close;
> > +	prot->recvmsg = vsock_bpf_recvmsg;
> > +	prot->sock_is_readable = sk_msg_is_readable;
> > +}
> > +
> > +static void vsock_bpf_check_needs_rebuild(struct proto *ops)
> > +{
> > +	/* Paired with the smp_store_release() below. */
> > +	if (unlikely(ops != smp_load_acquire(&vsock_prot_saved))) {
> > +		spin_lock_bh(&vsock_prot_lock);
> > +		if (likely(ops != vsock_prot_saved)) {
> > +			vsock_bpf_rebuild_protos(&vsock_bpf_prot, ops);
> > +			/* Make sure proto function pointers are updated before publishing the
> > +			 * pointer to the struct.
> > +			 */
> > +			smp_store_release(&vsock_prot_saved, ops);
> > +		}
> > +		spin_unlock_bh(&vsock_prot_lock);
> > +	}
> > +}
> > +
> > +int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
> > +{
> > +	struct vsock_sock *vsk;
> > +
> > +	if (restore) {
> > +		sk->sk_write_space = psock->saved_write_space;
> > +		sock_replace_proto(sk, psock->sk_proto);
> > +		return 0;
> > +	}
> > +
> > +	vsk = vsock_sk(sk);
> > +	if (!vsk->transport)
> > +		return -ENODEV;
> > +
> > +	if (!vsk->transport->read_skb)
> > +		return -EOPNOTSUPP;
> > +
> > +	vsock_bpf_check_needs_rebuild(psock->sk_proto);
> > +	sock_replace_proto(sk, &vsock_bpf_prot);
> > +	return 0;
> > +}
> > +
> > +void __init vsock_bpf_build_proto(void)
> > +{
> > +	vsock_bpf_rebuild_protos(&vsock_bpf_prot, &vsock_proto);
> > +}
> > diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
> > index 671e03240fc5..40753b661c13 100644
> > --- a/net/vmw_vsock/vsock_loopback.c
> > +++ b/net/vmw_vsock/vsock_loopback.c
> > @@ -94,6 +94,8 @@ static struct virtio_transport loopback_transport = {
> > 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
> > 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
> > 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
> > +
> > +		.read_skb = virtio_transport_read_skb,
> > 	},
> > 
> > 	.send_pkt = vsock_loopback_send_pkt,
> > 
> > -- 
> > 2.35.1
> > 
> 

Thread overview: 16+ messages
2023-01-31  4:35 [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Bobby Eshleman
2023-01-31  4:35 ` [PATCH RFC net-next v2 1/3] vsock: support sockmap Bobby Eshleman
2023-02-16 10:11   ` Stefano Garzarella
2023-02-16 10:11     ` Stefano Garzarella
2023-02-16 17:15     ` Bobby Eshleman
2023-01-31  4:35 ` [PATCH RFC net-next v2 2/3] selftests/bpf: add vsock to vmtest.sh Bobby Eshleman
2023-02-16 10:12   ` Stefano Garzarella
2023-02-16 10:12     ` Stefano Garzarella
2023-01-31  4:35 ` [PATCH RFC net-next v2 3/3] selftests/bpf: Add a test case for vsock sockmap Bobby Eshleman
2023-02-16 10:16   ` Stefano Garzarella
2023-02-16 10:16     ` Stefano Garzarella
2023-02-05 20:08 ` [PATCH RFC net-next v2 0/3] vsock: add support for sockmap Cong Wang
2023-02-05 20:08   ` Cong Wang
2023-02-03  5:30   ` Bobby Eshleman
2023-02-16 10:20 ` Stefano Garzarella
2023-02-16 10:20   ` Stefano Garzarella
