* [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
@ 2021-06-11 11:07 Arseny Krasnov
  2021-06-11 11:09 ` [PATCH v11 01/18] af_vsock: update functions for connectible socket Arseny Krasnov
                   ` (19 more replies)
  0 siblings, 20 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:07 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Andra Paraschiv, Norbert Slusarek, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

	This patchset implements SOCK_SEQPACKET support for the virtio
transport.
	Since SOCK_SEQPACKET must preserve record boundaries, a new bit
was added to the packet header's 'flags' field: SEQ_EOR. This bit is
set to 1 in the last RW packet of a message.
	As packets of one socket are reordered neither on the vsock nor
on the vhost transport layer, this bit is enough to restore the
original message on the receiver's side. If the user's buffer is
smaller than the message length, all data that does not fit is
dropped.
	The maximum length of a message is limited by the
'peer_buf_alloc' value.
	The implementation also supports the 'MSG_TRUNC' flag.
	Tests are also implemented.

	Thanks to stsp2@yandex.ru for the encouragement and initial
design recommendations.

 Arseny Krasnov (18):
  af_vsock: update functions for connectible socket
  af_vsock: separate wait data loop
  af_vsock: separate receive data loop
  af_vsock: implement SEQPACKET receive loop
  af_vsock: implement send logic for SEQPACKET
  af_vsock: rest of SEQPACKET support
  af_vsock: update comments for stream sockets
  virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
  virtio/vsock: simplify credit update function API
  virtio/vsock: defines and constants for SEQPACKET
  virtio/vsock: dequeue callback for SOCK_SEQPACKET
  virtio/vsock: add SEQPACKET receive logic
  virtio/vsock: rest of SOCK_SEQPACKET support
  virtio/vsock: enable SEQPACKET for transport
  vhost/vsock: enable SEQPACKET for transport
  vsock/loopback: enable SEQPACKET for transport
  vsock_test: add SOCK_SEQPACKET tests
  virtio/vsock: update trace event for SEQPACKET

 drivers/vhost/vsock.c                              |  56 ++-
 include/linux/virtio_vsock.h                       |  10 +
 include/net/af_vsock.h                             |   8 +
 .../trace/events/vsock_virtio_transport_common.h   |   5 +-
 include/uapi/linux/virtio_vsock.h                  |   9 +
 net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
 net/vmw_vsock/virtio_transport.c                   |  26 ++
 net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
 net/vmw_vsock/vsock_loopback.c                     |  12 +
 tools/testing/vsock/util.c                         |  32 +-
 tools/testing/vsock/util.h                         |   3 +
 tools/testing/vsock/vsock_test.c                   | 116 ++++++
 12 files changed, 730 insertions(+), 190 deletions(-)

 v10 -> v11:
 General changelog:
  - data is now copied to the user's buffer only when
    the whole message has been received.
  - the reader is woken up when an EOR packet is received.
  - if the read syscall was interrupted by a signal or
    timed out, an error is returned (not 0).

 Per patch changelog:
  see every patch after '---' line.

 v9 -> v10:
 General changelog:
 - patch for write serialization removed from patchset.
 - commit messages rephrased

 Per patch changelog:
  see every patch after '---' line.

 v8 -> v9:
 General changelog:
 - see per patch change log.

 Per patch changelog:
  see every patch after '---' line.

 v7 -> v8:
 General changelog:
 - the whole idea is simplified: the channel is now considered
   reliable, so SEQ_BEGIN, SEQ_END, 'msg_len' and 'msg_id' were
   removed. The only thing used to mark the end of a message is a bit
   in the 'flags' field of the packet header: VIRTIO_VSOCK_SEQ_EOR.
   A packet with this bit set to 1 is the last packet of a message.

 - POSIX MSG_EOR support is removed, as there is no exact
   description of how it works.

 - all changes to 'include/uapi/linux/virtio_vsock.h' moved
   to a dedicated patch, as these changes are linked with the
   patch to the spec.

 - patch 'virtio/vsock: SEQPACKET feature bit support' is now merged
   into 'virtio/vsock: setup SEQPACKET ops for transport'.

 - patch 'vhost/vsock: SEQPACKET feature bit support' is now merged
   into 'vhost/vsock: setup SEQPACKET ops for transport'.

 Per patch changelog:
  see every patch after '---' line.

 v6 -> v7:
 General changelog:
 - the virtio transport callback for message length is now removed
   from the transport. The length of a record is returned by the
   dequeue callback.

 - the function which tries to get the message length now returns 0
   when the rx queue is empty. Also, the length of the current
   message in progress is set to 0 when the message is processed or
   an error happens.

 - patches for the virtio feature bit moved after the patches with
   transport ops.

 Per patch changelog:
  see every patch after '---' line.

 v5 -> v6:
 General changelog:
 - virtio transport specific callbacks which send SEQ_BEGIN or
   SEQ_END are now hidden inside the virtio transport. Only the
   enqueue, dequeue and record length callbacks are provided by the
   transport.

 - a virtio feature bit for SEQPACKET socket support is introduced:
   VIRTIO_VSOCK_F_SEQPACKET.

 - the 'msg_cnt' field in 'struct virtio_vsock_seq_hdr' is renamed to
   'msg_id' and used as an id.

 Per patch changelog:
 - 'af_vsock: separate wait data loop':
    1) Commit message updated.
    2) 'prepare_to_wait()' moved inside the while loop (thanks to
       Jorgen Hansen).
    The patch was marked 'Reviewed-by' with 1), but because of 2) I
    removed the R-b tag.

 - 'af_vsock: separate receive data loop': commit message
    updated.
    The 'Reviewed-by' tag is kept with that fix.

 - 'af_vsock: implement SEQPACKET receive loop': style fixes.

 - 'af_vsock: rest of SEQPACKET support':
    1) 'module_put()' added when the transport callback check fails.
    2) Now only the 'seqpacket_allow()' callback is called to check
       SEQPACKET support in the transport.

 - 'af_vsock: update comments for stream sockets': commit message
    updated.
    The 'Reviewed-by' tag is kept with that fix.

 - 'virtio/vsock: set packet's type in send':
    1) Commit message updated.
    2) The 'type' parameter of 'virtio_transport_send_credit_update()'
       is also removed in this patch instead of in the next one.

 - 'virtio/vsock: dequeue callback for SOCK_SEQPACKET': SEQPACKET
    related state is wrapped in a dedicated struct.

 - 'virtio/vsock: update trace event for SEQPACKET': format strings
    are no longer broken by new lines.

 v4 -> v5:
 - patches reorganized:
   1) Setting of the packet's type in 'virtio_transport_send_pkt_info()'
      is moved to a separate patch.
   2) Simplification of 'virtio_transport_send_credit_update()' is
      moved to a separate patch, before the main virtio/vsock patches.
 - style problems fixed
 - in 'af_vsock: separate receive data loop', an extra 'release_sock()'
   is removed
 - trace event fields for SEQPACKET added
 - in 'af_vsock: separate wait data loop':
   1) 'goto out;' removed from 'vsock_wait_data()'.
   2) Comment for the invalid data amount changed.
 - in 'af_vsock: rest of SEQPACKET support', the 'new_transport'
   pointer check is moved after 'try_module_get()'
 - in 'af_vsock: update comments for stream sockets', 'connect-oriented'
   is replaced with 'connection-oriented'
 - in 'loopback/vsock: setup SEQPACKET ops for transport',
   'loopback/vsock' is replaced with 'vsock/loopback'

 v3 -> v4:
 - SEQPACKET specific metadata moved from the packet header to the
   payload and called 'virtio_vsock_seq_hdr'
 - record integrity check:
   1) A SEQ_END operation was added, which marks the end of a record.
   2) Both SEQ_BEGIN and SEQ_END carry a counter which is incremented
      on every marker sent.
 - af_vsock.c: socket operations for STREAM and SEQPACKET call the
   same functions instead of having their own "gates" differing only
   by name: e.g. 'vsock_seqpacket/stream_getsockopt()' are now
   replaced with 'vsock_connectible_getsockopt()'.
 - af_vsock.c: the 'seqpacket_dequeue' callback returns an error and a
   flag indicating that the record is ready. There is no need to
   return the number of copied bytes, because the case when a record
   is received successfully is checked at the virtio transport layer,
   when SEQ_END is processed. Also, the user doesn't need the number
   of copied bytes, because 'recv()' on a SEQPACKET socket returns
   either an error, the length of the user's buffer or the length of
   the whole record (both of which are known in af_vsock.c).
 - af_vsock.c: both wait loops in af_vsock.c (for data and for space)
   moved to separate functions, because both are now called from
   several places.
 - af_vsock.c: 'vsock_assign_transport()' checks that the
   'new_transport' pointer is not NULL and returns 'ESOCKTNOSUPPORT'
   instead of 'ENODEV' if it failed to use the transport.
 - tools/testing/vsock/vsock_test.c: tests renamed

 v2 -> v3:
 - patches reorganized: split into preparation and implementation
   patches
 - local variables are declared in "reverse Christmas tree" manner
 - virtio_transport_common.c: proper leXX_to_cpu() used for vsock
   header field access
 - af_vsock.c: 'vsock_connectible_*sockopt()' added as code shared
   between stream and seqpacket sockets.
 - af_vsock.c: loops in '__vsock_*_recvmsg()' refactored.
 - af_vsock.c: 'vsock_wait_data()' refactored.

 v1 -> v2:
 - patches reordered: af_vsock.c related changes now come before the
   virtio vsock ones
 - patches reorganized: more small patches, where +/- are not mixed
 - tests for SOCK_SEQPACKET added
 - all commit messages updated
 - af_vsock.c: 'vsock_pre_recv_check()' inlined into
   'vsock_connectible_recvmsg()'
 - af_vsock.c: 'vsock_assign_transport()' returns ENODEV if the
   transport was not found
 - virtio_transport_common.c: transport callback for seqpacket dequeue
 - virtio_transport_common.c: simplified
   'virtio_transport_recv_connected()'
 - virtio_transport_common.c: send a reset on socket and packet type
   mismatch.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>

-- 
2.25.1


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [PATCH v11 01/18] af_vsock: update functions for connectible socket
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
@ 2021-06-11 11:09 ` Arseny Krasnov
  2021-06-11 11:10 ` [PATCH v11 02/18] af_vsock: separate wait data loop Arseny Krasnov
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:09 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Colin Ian King, Andra Paraschiv, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Prepare af_vsock.c for SEQPACKET support: rename some functions, such
as setsockopt(), getsockopt(), connect(), recvmsg() and sendmsg(), in a
more general manner, because they will be shared between stream and
SEQPACKET sockets.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/af_vsock.c | 64 +++++++++++++++++++++-------------------
 1 file changed, 34 insertions(+), 30 deletions(-)

diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 92a72f0e0d94..7dd8e70d78cd 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -604,8 +604,8 @@ static void vsock_pending_work(struct work_struct *work)
 
 /**** SOCKET OPERATIONS ****/
 
-static int __vsock_bind_stream(struct vsock_sock *vsk,
-			       struct sockaddr_vm *addr)
+static int __vsock_bind_connectible(struct vsock_sock *vsk,
+				    struct sockaddr_vm *addr)
 {
 	static u32 port;
 	struct sockaddr_vm new_addr;
@@ -685,7 +685,7 @@ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr)
 	switch (sk->sk_socket->type) {
 	case SOCK_STREAM:
 		spin_lock_bh(&vsock_table_lock);
-		retval = __vsock_bind_stream(vsk, addr);
+		retval = __vsock_bind_connectible(vsk, addr);
 		spin_unlock_bh(&vsock_table_lock);
 		break;
 
@@ -768,6 +768,11 @@ static struct sock *__vsock_create(struct net *net,
 	return sk;
 }
 
+static bool sock_type_connectible(u16 type)
+{
+	return type == SOCK_STREAM;
+}
+
 static void __vsock_release(struct sock *sk, int level)
 {
 	if (sk) {
@@ -786,7 +791,7 @@ static void __vsock_release(struct sock *sk, int level)
 
 		if (vsk->transport)
 			vsk->transport->release(vsk);
-		else if (sk->sk_type == SOCK_STREAM)
+		else if (sock_type_connectible(sk->sk_type))
 			vsock_remove_sock(vsk);
 
 		sock_orphan(sk);
@@ -948,7 +953,7 @@ static int vsock_shutdown(struct socket *sock, int mode)
 	lock_sock(sk);
 	if (sock->state == SS_UNCONNECTED) {
 		err = -ENOTCONN;
-		if (sk->sk_type == SOCK_STREAM)
+		if (sock_type_connectible(sk->sk_type))
 			goto out;
 	} else {
 		sock->state = SS_DISCONNECTING;
@@ -961,7 +966,7 @@ static int vsock_shutdown(struct socket *sock, int mode)
 		sk->sk_shutdown |= mode;
 		sk->sk_state_change(sk);
 
-		if (sk->sk_type == SOCK_STREAM) {
+		if (sock_type_connectible(sk->sk_type)) {
 			sock_reset_flag(sk, SOCK_DONE);
 			vsock_send_shutdown(sk, mode);
 		}
@@ -1016,7 +1021,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
 		if (!(sk->sk_shutdown & SEND_SHUTDOWN))
 			mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND;
 
-	} else if (sock->type == SOCK_STREAM) {
+	} else if (sock_type_connectible(sk->sk_type)) {
 		const struct vsock_transport *transport;
 
 		lock_sock(sk);
@@ -1263,8 +1268,8 @@ static void vsock_connect_timeout(struct work_struct *work)
 	sock_put(sk);
 }
 
-static int vsock_stream_connect(struct socket *sock, struct sockaddr *addr,
-				int addr_len, int flags)
+static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+			 int addr_len, int flags)
 {
 	int err;
 	struct sock *sk;
@@ -1414,7 +1419,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
 
 	lock_sock(listener);
 
-	if (sock->type != SOCK_STREAM) {
+	if (!sock_type_connectible(sock->type)) {
 		err = -EOPNOTSUPP;
 		goto out;
 	}
@@ -1491,7 +1496,7 @@ static int vsock_listen(struct socket *sock, int backlog)
 
 	lock_sock(sk);
 
-	if (sock->type != SOCK_STREAM) {
+	if (!sock_type_connectible(sk->sk_type)) {
 		err = -EOPNOTSUPP;
 		goto out;
 	}
@@ -1535,11 +1540,11 @@ static void vsock_update_buffer_size(struct vsock_sock *vsk,
 	vsk->buffer_size = val;
 }
 
-static int vsock_stream_setsockopt(struct socket *sock,
-				   int level,
-				   int optname,
-				   sockptr_t optval,
-				   unsigned int optlen)
+static int vsock_connectible_setsockopt(struct socket *sock,
+					int level,
+					int optname,
+					sockptr_t optval,
+					unsigned int optlen)
 {
 	int err;
 	struct sock *sk;
@@ -1617,10 +1622,10 @@ static int vsock_stream_setsockopt(struct socket *sock,
 	return err;
 }
 
-static int vsock_stream_getsockopt(struct socket *sock,
-				   int level, int optname,
-				   char __user *optval,
-				   int __user *optlen)
+static int vsock_connectible_getsockopt(struct socket *sock,
+					int level, int optname,
+					char __user *optval,
+					int __user *optlen)
 {
 	int err;
 	int len;
@@ -1688,8 +1693,8 @@ static int vsock_stream_getsockopt(struct socket *sock,
 	return 0;
 }
 
-static int vsock_stream_sendmsg(struct socket *sock, struct msghdr *msg,
-				size_t len)
+static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
+				     size_t len)
 {
 	struct sock *sk;
 	struct vsock_sock *vsk;
@@ -1828,10 +1833,9 @@ static int vsock_stream_sendmsg(struct socket *sock, struct msghdr *msg,
 	return err;
 }
 
-
 static int
-vsock_stream_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
-		     int flags)
+vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+			  int flags)
 {
 	struct sock *sk;
 	struct vsock_sock *vsk;
@@ -2007,7 +2011,7 @@ static const struct proto_ops vsock_stream_ops = {
 	.owner = THIS_MODULE,
 	.release = vsock_release,
 	.bind = vsock_bind,
-	.connect = vsock_stream_connect,
+	.connect = vsock_connect,
 	.socketpair = sock_no_socketpair,
 	.accept = vsock_accept,
 	.getname = vsock_getname,
@@ -2015,10 +2019,10 @@ static const struct proto_ops vsock_stream_ops = {
 	.ioctl = sock_no_ioctl,
 	.listen = vsock_listen,
 	.shutdown = vsock_shutdown,
-	.setsockopt = vsock_stream_setsockopt,
-	.getsockopt = vsock_stream_getsockopt,
-	.sendmsg = vsock_stream_sendmsg,
-	.recvmsg = vsock_stream_recvmsg,
+	.setsockopt = vsock_connectible_setsockopt,
+	.getsockopt = vsock_connectible_getsockopt,
+	.sendmsg = vsock_connectible_sendmsg,
+	.recvmsg = vsock_connectible_recvmsg,
 	.mmap = sock_no_mmap,
 	.sendpage = sock_no_sendpage,
 };
-- 
2.25.1



* [PATCH v11 02/18] af_vsock: separate wait data loop
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
  2021-06-11 11:09 ` [PATCH v11 01/18] af_vsock: update functions for connectible socket Arseny Krasnov
@ 2021-06-11 11:10 ` Arseny Krasnov
  2021-06-11 11:10 ` [PATCH v11 03/18] af_vsock: separate receive " Arseny Krasnov
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:10 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Norbert Slusarek, Colin Ian King, Andra Paraschiv
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

The wait-for-data loop can be shared between SEQPACKET and STREAM
sockets, so move it to a dedicated function. While moving the code
around, let's update an old comment.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/af_vsock.c | 156 +++++++++++++++++++++------------------
 1 file changed, 84 insertions(+), 72 deletions(-)

diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 7dd8e70d78cd..4269e80b02cd 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1833,6 +1833,69 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
 	return err;
 }
 
+static int vsock_wait_data(struct sock *sk, struct wait_queue_entry *wait,
+			   long timeout,
+			   struct vsock_transport_recv_notify_data *recv_data,
+			   size_t target)
+{
+	const struct vsock_transport *transport;
+	struct vsock_sock *vsk;
+	s64 data;
+	int err;
+
+	vsk = vsock_sk(sk);
+	err = 0;
+	transport = vsk->transport;
+
+	while ((data = vsock_stream_has_data(vsk)) == 0) {
+		prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE);
+
+		if (sk->sk_err != 0 ||
+		    (sk->sk_shutdown & RCV_SHUTDOWN) ||
+		    (vsk->peer_shutdown & SEND_SHUTDOWN)) {
+			break;
+		}
+
+		/* Don't wait for non-blocking sockets. */
+		if (timeout == 0) {
+			err = -EAGAIN;
+			break;
+		}
+
+		if (recv_data) {
+			err = transport->notify_recv_pre_block(vsk, target, recv_data);
+			if (err < 0)
+				break;
+		}
+
+		release_sock(sk);
+		timeout = schedule_timeout(timeout);
+		lock_sock(sk);
+
+		if (signal_pending(current)) {
+			err = sock_intr_errno(timeout);
+			break;
+		} else if (timeout == 0) {
+			err = -EAGAIN;
+			break;
+		}
+	}
+
+	finish_wait(sk_sleep(sk), wait);
+
+	if (err)
+		return err;
+
+	/* Internal transport error when checking for available
+	 * data. XXX This should be changed to a connection
+	 * reset in a later change.
+	 */
+	if (data < 0)
+		return -ENOMEM;
+
+	return data;
+}
+
 static int
 vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 			  int flags)
@@ -1912,85 +1975,34 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 
 
 	while (1) {
-		s64 ready;
+		ssize_t read;
 
-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
-		ready = vsock_stream_has_data(vsk);
+		err = vsock_wait_data(sk, &wait, timeout, &recv_data, target);
+		if (err <= 0)
+			break;
 
-		if (ready == 0) {
-			if (sk->sk_err != 0 ||
-			    (sk->sk_shutdown & RCV_SHUTDOWN) ||
-			    (vsk->peer_shutdown & SEND_SHUTDOWN)) {
-				finish_wait(sk_sleep(sk), &wait);
-				break;
-			}
-			/* Don't wait for non-blocking sockets. */
-			if (timeout == 0) {
-				err = -EAGAIN;
-				finish_wait(sk_sleep(sk), &wait);
-				break;
-			}
-
-			err = transport->notify_recv_pre_block(
-					vsk, target, &recv_data);
-			if (err < 0) {
-				finish_wait(sk_sleep(sk), &wait);
-				break;
-			}
-			release_sock(sk);
-			timeout = schedule_timeout(timeout);
-			lock_sock(sk);
-
-			if (signal_pending(current)) {
-				err = sock_intr_errno(timeout);
-				finish_wait(sk_sleep(sk), &wait);
-				break;
-			} else if (timeout == 0) {
-				err = -EAGAIN;
-				finish_wait(sk_sleep(sk), &wait);
-				break;
-			}
-		} else {
-			ssize_t read;
-
-			finish_wait(sk_sleep(sk), &wait);
-
-			if (ready < 0) {
-				/* Invalid queue pair content. XXX This should
-				* be changed to a connection reset in a later
-				* change.
-				*/
-
-				err = -ENOMEM;
-				goto out;
-			}
-
-			err = transport->notify_recv_pre_dequeue(
-					vsk, target, &recv_data);
-			if (err < 0)
-				break;
+		err = transport->notify_recv_pre_dequeue(vsk, target,
+							 &recv_data);
+		if (err < 0)
+			break;
 
-			read = transport->stream_dequeue(
-					vsk, msg,
-					len - copied, flags);
-			if (read < 0) {
-				err = -ENOMEM;
-				break;
-			}
+		read = transport->stream_dequeue(vsk, msg, len - copied, flags);
+		if (read < 0) {
+			err = -ENOMEM;
+			break;
+		}
 
-			copied += read;
+		copied += read;
 
-			err = transport->notify_recv_post_dequeue(
-					vsk, target, read,
-					!(flags & MSG_PEEK), &recv_data);
-			if (err < 0)
-				goto out;
+		err = transport->notify_recv_post_dequeue(vsk, target, read,
+						!(flags & MSG_PEEK), &recv_data);
+		if (err < 0)
+			goto out;
 
-			if (read >= target || flags & MSG_PEEK)
-				break;
+		if (read >= target || flags & MSG_PEEK)
+			break;
 
-			target -= read;
-		}
+		target -= read;
 	}
 
 	if (sk->sk_err)
-- 
2.25.1



* [PATCH v11 03/18] af_vsock: separate receive data loop
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
  2021-06-11 11:09 ` [PATCH v11 01/18] af_vsock: update functions for connectible socket Arseny Krasnov
  2021-06-11 11:10 ` [PATCH v11 02/18] af_vsock: separate wait data loop Arseny Krasnov
@ 2021-06-11 11:10 ` Arseny Krasnov
  2021-06-11 11:10 ` [PATCH v11 04/18] af_vsock: implement SEQPACKET receive loop Arseny Krasnov
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:10 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Andra Paraschiv, Colin Ian King, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Some code in the receive data loop can be shared between SEQPACKET
and STREAM sockets, while other parts are type specific, so move the
STREAM specific receive logic to a dedicated function,
'__vsock_stream_recvmsg()', while the checks that will be the same for
both STREAM and SEQPACKET sockets stay in 'vsock_connectible_recvmsg()'.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/af_vsock.c | 116 ++++++++++++++++++++++-----------------
 1 file changed, 67 insertions(+), 49 deletions(-)

diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 4269e80b02cd..c4f6bfa1e381 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1896,65 +1896,22 @@ static int vsock_wait_data(struct sock *sk, struct wait_queue_entry *wait,
 	return data;
 }
 
-static int
-vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
-			  int flags)
+static int __vsock_stream_recvmsg(struct sock *sk, struct msghdr *msg,
+				  size_t len, int flags)
 {
-	struct sock *sk;
-	struct vsock_sock *vsk;
+	struct vsock_transport_recv_notify_data recv_data;
 	const struct vsock_transport *transport;
-	int err;
-	size_t target;
+	struct vsock_sock *vsk;
 	ssize_t copied;
+	size_t target;
 	long timeout;
-	struct vsock_transport_recv_notify_data recv_data;
+	int err;
 
 	DEFINE_WAIT(wait);
 
-	sk = sock->sk;
 	vsk = vsock_sk(sk);
-	err = 0;
-
-	lock_sock(sk);
-
 	transport = vsk->transport;
 
-	if (!transport || sk->sk_state != TCP_ESTABLISHED) {
-		/* Recvmsg is supposed to return 0 if a peer performs an
-		 * orderly shutdown. Differentiate between that case and when a
-		 * peer has not connected or a local shutdown occurred with the
-		 * SOCK_DONE flag.
-		 */
-		if (sock_flag(sk, SOCK_DONE))
-			err = 0;
-		else
-			err = -ENOTCONN;
-
-		goto out;
-	}
-
-	if (flags & MSG_OOB) {
-		err = -EOPNOTSUPP;
-		goto out;
-	}
-
-	/* We don't check peer_shutdown flag here since peer may actually shut
-	 * down, but there can be data in the queue that a local socket can
-	 * receive.
-	 */
-	if (sk->sk_shutdown & RCV_SHUTDOWN) {
-		err = 0;
-		goto out;
-	}
-
-	/* It is valid on Linux to pass in a zero-length receive buffer.  This
-	 * is not an error.  We may as well bail out now.
-	 */
-	if (!len) {
-		err = 0;
-		goto out;
-	}
-
 	/* We must not copy less than target bytes into the user's buffer
 	 * before returning successfully, so we wait for the consume queue to
 	 * have that much data to consume before dequeueing.  Note that this
@@ -2013,6 +1970,67 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 	if (copied > 0)
 		err = copied;
 
+out:
+	return err;
+}
+
+static int
+vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+			  int flags)
+{
+	struct sock *sk;
+	struct vsock_sock *vsk;
+	const struct vsock_transport *transport;
+	int err;
+
+	DEFINE_WAIT(wait);
+
+	sk = sock->sk;
+	vsk = vsock_sk(sk);
+	err = 0;
+
+	lock_sock(sk);
+
+	transport = vsk->transport;
+
+	if (!transport || sk->sk_state != TCP_ESTABLISHED) {
+		/* Recvmsg is supposed to return 0 if a peer performs an
+		 * orderly shutdown. Differentiate between that case and when a
+		 * peer has not connected or a local shutdown occurred with the
+		 * SOCK_DONE flag.
+		 */
+		if (sock_flag(sk, SOCK_DONE))
+			err = 0;
+		else
+			err = -ENOTCONN;
+
+		goto out;
+	}
+
+	if (flags & MSG_OOB) {
+		err = -EOPNOTSUPP;
+		goto out;
+	}
+
+	/* We don't check peer_shutdown flag here since peer may actually shut
+	 * down, but there can be data in the queue that a local socket can
+	 * receive.
+	 */
+	if (sk->sk_shutdown & RCV_SHUTDOWN) {
+		err = 0;
+		goto out;
+	}
+
+	/* It is valid on Linux to pass in a zero-length receive buffer.  This
+	 * is not an error.  We may as well bail out now.
+	 */
+	if (!len) {
+		err = 0;
+		goto out;
+	}
+
+	err = __vsock_stream_recvmsg(sk, msg, len, flags);
+
 out:
 	release_sock(sk);
 	return err;
-- 
2.25.1



* [PATCH v11 04/18] af_vsock: implement SEQPACKET receive loop
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (2 preceding siblings ...)
  2021-06-11 11:10 ` [PATCH v11 03/18] af_vsock: separate receive " Arseny Krasnov
@ 2021-06-11 11:10 ` Arseny Krasnov
  2021-06-11 11:10 ` [PATCH v11 05/18] af_vsock: implement send logic for SEQPACKET Arseny Krasnov
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:10 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Jorgen Hansen, Norbert Slusarek, Colin Ian King, Andra Paraschiv
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Add a receive loop for SEQPACKET. It looks like the receive loop for
STREAM, but there are differences:
1) It doesn't call notify callbacks.
2) It ignores the 'SO_SNDLOWAT' and 'SO_RCVLOWAT' values, because
   they make no sense in the SEQPACKET case.
3) It waits until the whole record is received.
4) It processes and sets the 'MSG_TRUNC' flag.

So, to avoid extra conditions for the two socket types inside one
loop, two independent functions were created.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) 'msg_ready' input argument removed.
 2) On 'vsock_wait_data()' error, the error is returned (not 0).
 3) Receive loop removed, as the user is woken up only when the
    whole message can be dequeued.

 include/net/af_vsock.h   |  4 +++
 net/vmw_vsock/af_vsock.c | 55 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index b1c717286993..4d7cf6b2aca2 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -135,6 +135,10 @@ struct vsock_transport {
 	bool (*stream_is_active)(struct vsock_sock *);
 	bool (*stream_allow)(u32 cid, u32 port);
 
+	/* SEQ_PACKET. */
+	ssize_t (*seqpacket_dequeue)(struct vsock_sock *vsk, struct msghdr *msg,
+				     int flags);
+
 	/* Notification. */
 	int (*notify_poll_in)(struct vsock_sock *, size_t, bool *);
 	int (*notify_poll_out)(struct vsock_sock *, size_t, bool *);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index c4f6bfa1e381..87ae26b2e3e1 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1974,6 +1974,56 @@ static int __vsock_stream_recvmsg(struct sock *sk, struct msghdr *msg,
 	return err;
 }
 
+static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg,
+				     size_t len, int flags)
+{
+	const struct vsock_transport *transport;
+	struct vsock_sock *vsk;
+	ssize_t record_len;
+	long timeout;
+	int err = 0;
+	DEFINE_WAIT(wait);
+
+	vsk = vsock_sk(sk);
+	transport = vsk->transport;
+
+	timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+
+	err = vsock_wait_data(sk, &wait, timeout, NULL, 0);
+	if (err <= 0)
+		goto out;
+
+	record_len = transport->seqpacket_dequeue(vsk, msg, flags);
+
+	if (record_len < 0) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	if (sk->sk_err) {
+		err = -sk->sk_err;
+	} else if (sk->sk_shutdown & RCV_SHUTDOWN) {
+		err = 0;
+	} else {
+		/* User sets MSG_TRUNC, so return real length of
+		 * packet.
+		 */
+		if (flags & MSG_TRUNC)
+			err = record_len;
+		else
+			err = len - msg_data_left(msg);
+
+		/* Always set MSG_TRUNC if real length of packet is
+		 * bigger than user's buffer.
+		 */
+		if (record_len > len)
+			msg->msg_flags |= MSG_TRUNC;
+	}
+
+out:
+	return err;
+}
+
 static int
 vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 			  int flags)
@@ -2029,7 +2079,10 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 		goto out;
 	}
 
-	err = __vsock_stream_recvmsg(sk, msg, len, flags);
+	if (sk->sk_type == SOCK_STREAM)
+		err = __vsock_stream_recvmsg(sk, msg, len, flags);
+	else
+		err = __vsock_seqpacket_recvmsg(sk, msg, len, flags);
 
 out:
 	release_sock(sk);
-- 
2.25.1



* [PATCH v11 05/18] af_vsock: implement send logic for SEQPACKET
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (3 preceding siblings ...)
  2021-06-11 11:10 ` [PATCH v11 04/18] af_vsock: implement SEQPACKET receive loop Arseny Krasnov
@ 2021-06-11 11:10 ` Arseny Krasnov
  2021-06-11 11:11 ` [PATCH v11 06/18] af_vsock: rest of SEQPACKET support Arseny Krasnov
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:10 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Colin Ian King, Andra Paraschiv, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Update the current stream enqueue function for SEQPACKET support:
1) Call the transport's seqpacket enqueue callback.
2) For SOCK_SEQPACKET, the enqueue function returns the whole record
   length on success, or an error.
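
The return-value rule this patch introduces can be condensed into a standalone helper (a sketch; the name and the -12/ENOMEM stand-in are illustrative): a partial write is reported as success only for SOCK_STREAM, while SOCK_SEQPACKET reports the byte count only when the whole record was enqueued.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative mirror of the out_err logic in the hunk below: for
 * SOCK_STREAM any positive amount written is returned, while for
 * SOCK_SEQPACKET the count is returned only when the full record was
 * sent; otherwise the pending error is kept.
 */
static inline long sendmsg_result(bool is_seqpacket, size_t total_written,
				  size_t len, long pending_err)
{
	if (total_written > 0 &&
	    (!is_seqpacket || total_written == len))
		return (long)total_written;
	return pending_err;
}
```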

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 include/net/af_vsock.h   |  2 ++
 net/vmw_vsock/af_vsock.c | 20 +++++++++++++++-----
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 4d7cf6b2aca2..d6745d8b8f3e 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -138,6 +138,8 @@ struct vsock_transport {
 	/* SEQ_PACKET. */
 	ssize_t (*seqpacket_dequeue)(struct vsock_sock *vsk, struct msghdr *msg,
 				     int flags);
+	int (*seqpacket_enqueue)(struct vsock_sock *vsk, struct msghdr *msg,
+				 size_t len);
 
 	/* Notification. */
 	int (*notify_poll_in)(struct vsock_sock *, size_t, bool *);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 87ae26b2e3e1..9e0cc07e3caf 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1808,9 +1808,13 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
 		 * responsibility to check how many bytes we were able to send.
 		 */
 
-		written = transport->stream_enqueue(
-				vsk, msg,
-				len - total_written);
+		if (sk->sk_type == SOCK_SEQPACKET) {
+			written = transport->seqpacket_enqueue(vsk,
+						msg, len - total_written);
+		} else {
+			written = transport->stream_enqueue(vsk,
+					msg, len - total_written);
+		}
 		if (written < 0) {
 			err = -ENOMEM;
 			goto out_err;
@@ -1826,8 +1830,14 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
 	}
 
 out_err:
-	if (total_written > 0)
-		err = total_written;
+	if (total_written > 0) {
+		/* Return number of written bytes only if:
+		 * 1) SOCK_STREAM socket.
+		 * 2) SOCK_SEQPACKET socket when whole buffer is sent.
+		 */
+		if (sk->sk_type == SOCK_STREAM || total_written == len)
+			err = total_written;
+	}
 out:
 	release_sock(sk);
 	return err;
-- 
2.25.1



* [PATCH v11 06/18] af_vsock: rest of SEQPACKET support
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (4 preceding siblings ...)
  2021-06-11 11:10 ` [PATCH v11 05/18] af_vsock: implement send logic for SEQPACKET Arseny Krasnov
@ 2021-06-11 11:11 ` Arseny Krasnov
  2021-06-11 11:11 ` [PATCH v11 07/18] af_vsock: update comments for stream sockets Arseny Krasnov
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:11 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Jorgen Hansen, Norbert Slusarek, Colin Ian King, Andra Paraschiv
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Add socket ops for the SEQPACKET type and a .seqpacket_allow()
callback to query whether a transport supports SEQPACKET. Also split
the data-check path into STREAM and SEQPACKET branches.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) 'vsock_has_data()' function added.
 2) Commit message updated.

 include/net/af_vsock.h   |  2 ++
 net/vmw_vsock/af_vsock.c | 48 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index d6745d8b8f3e..ab207677e0a8 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -140,6 +140,8 @@ struct vsock_transport {
 				     int flags);
 	int (*seqpacket_enqueue)(struct vsock_sock *vsk, struct msghdr *msg,
 				 size_t len);
+	bool (*seqpacket_allow)(u32 remote_cid);
+	u32 (*seqpacket_has_data)(struct vsock_sock *vsk);
 
 	/* Notification. */
 	int (*notify_poll_in)(struct vsock_sock *, size_t, bool *);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 9e0cc07e3caf..21a56f52d683 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -452,6 +452,7 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
 		new_transport = transport_dgram;
 		break;
 	case SOCK_STREAM:
+	case SOCK_SEQPACKET:
 		if (vsock_use_local_transport(remote_cid))
 			new_transport = transport_local;
 		else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g ||
@@ -484,6 +485,14 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
 	if (!new_transport || !try_module_get(new_transport->module))
 		return -ENODEV;
 
+	if (sk->sk_type == SOCK_SEQPACKET) {
+		if (!new_transport->seqpacket_allow ||
+		    !new_transport->seqpacket_allow(remote_cid)) {
+			module_put(new_transport->module);
+			return -ESOCKTNOSUPPORT;
+		}
+	}
+
 	ret = new_transport->init(vsk, psk);
 	if (ret) {
 		module_put(new_transport->module);
@@ -684,6 +693,7 @@ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr)
 
 	switch (sk->sk_socket->type) {
 	case SOCK_STREAM:
+	case SOCK_SEQPACKET:
 		spin_lock_bh(&vsock_table_lock);
 		retval = __vsock_bind_connectible(vsk, addr);
 		spin_unlock_bh(&vsock_table_lock);
@@ -770,7 +780,7 @@ static struct sock *__vsock_create(struct net *net,
 
 static bool sock_type_connectible(u16 type)
 {
-	return type == SOCK_STREAM;
+	return (type == SOCK_STREAM) || (type == SOCK_SEQPACKET);
 }
 
 static void __vsock_release(struct sock *sk, int level)
@@ -849,6 +859,16 @@ s64 vsock_stream_has_data(struct vsock_sock *vsk)
 }
 EXPORT_SYMBOL_GPL(vsock_stream_has_data);
 
+static s64 vsock_has_data(struct vsock_sock *vsk)
+{
+	struct sock *sk = sk_vsock(vsk);
+
+	if (sk->sk_type == SOCK_SEQPACKET)
+		return vsk->transport->seqpacket_has_data(vsk);
+	else
+		return vsock_stream_has_data(vsk);
+}
+
 s64 vsock_stream_has_space(struct vsock_sock *vsk)
 {
 	return vsk->transport->stream_has_space(vsk);
@@ -1857,7 +1877,7 @@ static int vsock_wait_data(struct sock *sk, struct wait_queue_entry *wait,
 	err = 0;
 	transport = vsk->transport;
 
-	while ((data = vsock_stream_has_data(vsk)) == 0) {
+	while ((data = vsock_has_data(vsk)) == 0) {
 		prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE);
 
 		if (sk->sk_err != 0 ||
@@ -2120,6 +2140,27 @@ static const struct proto_ops vsock_stream_ops = {
 	.sendpage = sock_no_sendpage,
 };
 
+static const struct proto_ops vsock_seqpacket_ops = {
+	.family = PF_VSOCK,
+	.owner = THIS_MODULE,
+	.release = vsock_release,
+	.bind = vsock_bind,
+	.connect = vsock_connect,
+	.socketpair = sock_no_socketpair,
+	.accept = vsock_accept,
+	.getname = vsock_getname,
+	.poll = vsock_poll,
+	.ioctl = sock_no_ioctl,
+	.listen = vsock_listen,
+	.shutdown = vsock_shutdown,
+	.setsockopt = vsock_connectible_setsockopt,
+	.getsockopt = vsock_connectible_getsockopt,
+	.sendmsg = vsock_connectible_sendmsg,
+	.recvmsg = vsock_connectible_recvmsg,
+	.mmap = sock_no_mmap,
+	.sendpage = sock_no_sendpage,
+};
+
 static int vsock_create(struct net *net, struct socket *sock,
 			int protocol, int kern)
 {
@@ -2140,6 +2181,9 @@ static int vsock_create(struct net *net, struct socket *sock,
 	case SOCK_STREAM:
 		sock->ops = &vsock_stream_ops;
 		break;
+	case SOCK_SEQPACKET:
+		sock->ops = &vsock_seqpacket_ops;
+		break;
 	default:
 		return -ESOCKTNOSUPPORT;
 	}
-- 
2.25.1



* [PATCH v11 07/18] af_vsock: update comments for stream sockets
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (5 preceding siblings ...)
  2021-06-11 11:11 ` [PATCH v11 06/18] af_vsock: rest of SEQPACKET support Arseny Krasnov
@ 2021-06-11 11:11 ` Arseny Krasnov
  2021-06-11 11:11 ` [PATCH v11 08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info() Arseny Krasnov
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:11 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Colin Ian King, Norbert Slusarek, Andra Paraschiv
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Replace 'stream' with 'connection oriented' in comments, as
SEQPACKET is also connection oriented.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/af_vsock.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 21a56f52d683..67954afef4e1 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -415,8 +415,8 @@ static void vsock_deassign_transport(struct vsock_sock *vsk)
 
 /* Assign a transport to a socket and call the .init transport callback.
  *
- * Note: for stream socket this must be called when vsk->remote_addr is set
- * (e.g. during the connect() or when a connection request on a listener
+ * Note: for connection oriented socket this must be called when vsk->remote_addr
+ * is set (e.g. during the connect() or when a connection request on a listener
  * socket is received).
  * The vsk->remote_addr is used to decide which transport to use:
  *  - remote CID == VMADDR_CID_LOCAL or g2h->local_cid or VMADDR_CID_HOST if
@@ -470,10 +470,10 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
 			return 0;
 
 		/* transport->release() must be called with sock lock acquired.
-		 * This path can only be taken during vsock_stream_connect(),
-		 * where we have already held the sock lock.
-		 * In the other cases, this function is called on a new socket
-		 * which is not assigned to any transport.
+		 * This path can only be taken during vsock_connect(), where we
+		 * have already held the sock lock. In the other cases, this
+		 * function is called on a new socket which is not assigned to
+		 * any transport.
 		 */
 		vsk->transport->release(vsk);
 		vsock_deassign_transport(vsk);
@@ -658,9 +658,10 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
 
 	vsock_addr_init(&vsk->local_addr, new_addr.svm_cid, new_addr.svm_port);
 
-	/* Remove stream sockets from the unbound list and add them to the hash
-	 * table for easy lookup by its address.  The unbound list is simply an
-	 * extra entry at the end of the hash table, a trick used by AF_UNIX.
+	/* Remove connection oriented sockets from the unbound list and add them
+	 * to the hash table for easy lookup by its address.  The unbound list
+	 * is simply an extra entry at the end of the hash table, a trick used
+	 * by AF_UNIX.
 	 */
 	__vsock_remove_bound(vsk);
 	__vsock_insert_bound(vsock_bound_sockets(&vsk->local_addr), vsk);
@@ -962,10 +963,10 @@ static int vsock_shutdown(struct socket *sock, int mode)
 	if ((mode & ~SHUTDOWN_MASK) || !mode)
 		return -EINVAL;
 
-	/* If this is a STREAM socket and it is not connected then bail out
-	 * immediately.  If it is a DGRAM socket then we must first kick the
-	 * socket so that it wakes up from any sleeping calls, for example
-	 * recv(), and then afterwards return the error.
+	/* If this is a connection oriented socket and it is not connected then
+	 * bail out immediately.  If it is a DGRAM socket then we must first
+	 * kick the socket so that it wakes up from any sleeping calls, for
+	 * example recv(), and then afterwards return the error.
 	 */
 
 	sk = sock->sk;
@@ -1737,7 +1738,9 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
 
 	transport = vsk->transport;
 
-	/* Callers should not provide a destination with stream sockets. */
+	/* Callers should not provide a destination with connection oriented
+	 * sockets.
+	 */
 	if (msg->msg_namelen) {
 		err = sk->sk_state == TCP_ESTABLISHED ? -EISCONN : -EOPNOTSUPP;
 		goto out;
-- 
2.25.1



* [PATCH v11 08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (6 preceding siblings ...)
  2021-06-11 11:11 ` [PATCH v11 07/18] af_vsock: update comments for stream sockets Arseny Krasnov
@ 2021-06-11 11:11 ` Arseny Krasnov
  2021-06-11 11:12 ` [PATCH v11 09/18] virtio/vsock: simplify credit update function API Arseny Krasnov
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:11 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Andra Paraschiv, Colin Ian King, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

There is no need to set a packet type that differs from the socket
type, so move the packet type from the 'info' structure into
'virtio_transport_send_pkt_info()'. Since only the stream type is
currently supported, set it directly in
'virtio_transport_send_pkt_info()', so callers don't need to set it.
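
The effect of the refactor can be condensed into a toy helper (purely illustrative, not the kernel function): whatever type a caller might have supplied is now irrelevant, because the send path chooses it in one place.

```c
#include <assert.h>

/* Mirrors the virtio_vsock enum value for stream packets. */
enum { TYPE_STREAM = 1 };

/* Toy version of the refactor: the caller-supplied type is ignored
 * and the send function picks the type itself, so every packet sent
 * through it carries a single, centrally chosen type.
 */
static inline int pkt_type_after_send(int caller_supplied_type)
{
	(void)caller_supplied_type; /* overwritten by the send path */
	return TYPE_STREAM;
}
```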

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/virtio_transport_common.c | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 902cb6dd710b..6503a8370130 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -179,6 +179,8 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
 	struct virtio_vsock_pkt *pkt;
 	u32 pkt_len = info->pkt_len;
 
+	info->type = VIRTIO_VSOCK_TYPE_STREAM;
+
 	t_ops = virtio_transport_get_ops(vsk);
 	if (unlikely(!t_ops))
 		return -EFAULT;
@@ -270,12 +272,10 @@ void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit)
 EXPORT_SYMBOL_GPL(virtio_transport_put_credit);
 
 static int virtio_transport_send_credit_update(struct vsock_sock *vsk,
-					       int type,
 					       struct virtio_vsock_hdr *hdr)
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_CREDIT_UPDATE,
-		.type = type,
 		.vsk = vsk,
 	};
 
@@ -383,11 +383,8 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 	 * messages, we set the limit to a high value. TODO: experiment
 	 * with different values.
 	 */
-	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) {
-		virtio_transport_send_credit_update(vsk,
-						    VIRTIO_VSOCK_TYPE_STREAM,
-						    NULL);
-	}
+	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
+		virtio_transport_send_credit_update(vsk, NULL);
 
 	return total;
 
@@ -496,8 +493,7 @@ void virtio_transport_notify_buffer_size(struct vsock_sock *vsk, u64 *val)
 
 	vvs->buf_alloc = *val;
 
-	virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM,
-					    NULL);
+	virtio_transport_send_credit_update(vsk, NULL);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_notify_buffer_size);
 
@@ -624,7 +620,6 @@ int virtio_transport_connect(struct vsock_sock *vsk)
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_REQUEST,
-		.type = VIRTIO_VSOCK_TYPE_STREAM,
 		.vsk = vsk,
 	};
 
@@ -636,7 +631,6 @@ int virtio_transport_shutdown(struct vsock_sock *vsk, int mode)
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_SHUTDOWN,
-		.type = VIRTIO_VSOCK_TYPE_STREAM,
 		.flags = (mode & RCV_SHUTDOWN ?
 			  VIRTIO_VSOCK_SHUTDOWN_RCV : 0) |
 			 (mode & SEND_SHUTDOWN ?
@@ -665,7 +659,6 @@ virtio_transport_stream_enqueue(struct vsock_sock *vsk,
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_RW,
-		.type = VIRTIO_VSOCK_TYPE_STREAM,
 		.msg = msg,
 		.pkt_len = len,
 		.vsk = vsk,
@@ -688,7 +681,6 @@ static int virtio_transport_reset(struct vsock_sock *vsk,
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_RST,
-		.type = VIRTIO_VSOCK_TYPE_STREAM,
 		.reply = !!pkt,
 		.vsk = vsk,
 	};
@@ -1000,7 +992,6 @@ virtio_transport_send_response(struct vsock_sock *vsk,
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_RESPONSE,
-		.type = VIRTIO_VSOCK_TYPE_STREAM,
 		.remote_cid = le64_to_cpu(pkt->hdr.src_cid),
 		.remote_port = le32_to_cpu(pkt->hdr.src_port),
 		.reply = true,
-- 
2.25.1



* [PATCH v11 09/18] virtio/vsock: simplify credit update function API
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (7 preceding siblings ...)
  2021-06-11 11:11 ` [PATCH v11 08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info() Arseny Krasnov
@ 2021-06-11 11:12 ` Arseny Krasnov
  2021-06-11 11:12 ` [PATCH v11 10/18] virtio/vsock: defines and constants for SEQPACKET Arseny Krasnov
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:12 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Jorgen Hansen, Andra Paraschiv, Colin Ian King, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

This function is static and its 'hdr' argument was always NULL, so
drop the argument.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 net/vmw_vsock/virtio_transport_common.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 6503a8370130..ad0d34d41444 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -271,8 +271,7 @@ void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit)
 }
 EXPORT_SYMBOL_GPL(virtio_transport_put_credit);
 
-static int virtio_transport_send_credit_update(struct vsock_sock *vsk,
-					       struct virtio_vsock_hdr *hdr)
+static int virtio_transport_send_credit_update(struct vsock_sock *vsk)
 {
 	struct virtio_vsock_pkt_info info = {
 		.op = VIRTIO_VSOCK_OP_CREDIT_UPDATE,
@@ -384,7 +383,7 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 	 * with different values.
 	 */
 	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
-		virtio_transport_send_credit_update(vsk, NULL);
+		virtio_transport_send_credit_update(vsk);
 
 	return total;
 
@@ -493,7 +492,7 @@ void virtio_transport_notify_buffer_size(struct vsock_sock *vsk, u64 *val)
 
 	vvs->buf_alloc = *val;
 
-	virtio_transport_send_credit_update(vsk, NULL);
+	virtio_transport_send_credit_update(vsk);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_notify_buffer_size);
 
-- 
2.25.1



* [PATCH v11 10/18] virtio/vsock: defines and constants for SEQPACKET
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (8 preceding siblings ...)
  2021-06-11 11:12 ` [PATCH v11 09/18] virtio/vsock: simplify credit update function API Arseny Krasnov
@ 2021-06-11 11:12 ` Arseny Krasnov
  2021-06-11 11:12 ` [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET Arseny Krasnov
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:12 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Norbert Slusarek, Andra Paraschiv, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Add a set of defines and constants for SOCK_SEQPACKET support
in vsock.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
---
 include/uapi/linux/virtio_vsock.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h
index 1d57ed3d84d2..3dd3555b2740 100644
--- a/include/uapi/linux/virtio_vsock.h
+++ b/include/uapi/linux/virtio_vsock.h
@@ -38,6 +38,9 @@
 #include <linux/virtio_ids.h>
 #include <linux/virtio_config.h>
 
+/* The feature bitmap for virtio vsock */
+#define VIRTIO_VSOCK_F_SEQPACKET	1	/* SOCK_SEQPACKET supported */
+
 struct virtio_vsock_config {
 	__le64 guest_cid;
 } __attribute__((packed));
@@ -65,6 +68,7 @@ struct virtio_vsock_hdr {
 
 enum virtio_vsock_type {
 	VIRTIO_VSOCK_TYPE_STREAM = 1,
+	VIRTIO_VSOCK_TYPE_SEQPACKET = 2,
 };
 
 enum virtio_vsock_op {
@@ -91,4 +95,9 @@ enum virtio_vsock_shutdown {
 	VIRTIO_VSOCK_SHUTDOWN_SEND = 2,
 };
 
+/* VIRTIO_VSOCK_OP_RW flags values */
+enum virtio_vsock_rw {
+	VIRTIO_VSOCK_SEQ_EOR = 1,
+};
+
 #endif /* _UAPI_LINUX_VIRTIO_VSOCK_H */
-- 
2.25.1



* [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (9 preceding siblings ...)
  2021-06-11 11:12 ` [PATCH v11 10/18] virtio/vsock: defines and constants for SEQPACKET Arseny Krasnov
@ 2021-06-11 11:12 ` Arseny Krasnov
  2021-06-18 13:44     ` Stefano Garzarella
  2021-06-11 11:12 ` [PATCH v11 12/18] virtio/vsock: add SEQPACKET receive logic Arseny Krasnov
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:12 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Norbert Slusarek, Andra Paraschiv, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

The callback fetches RW packets from the socket's rx queue until the
whole record is copied (if the user's buffer is full, the user is not
woken up). This avoids stalling the sender: if we woke the user up
and it left the syscall, nobody would send a credit update for the
rest of the record, and the sender would wait until the receiver
entered the read syscall again. So if the user's buffer is full, we
just send a credit update and drop the remaining data.
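
The dequeue loop described above can be modeled by a short standalone sketch (struct and function names are illustrative, not kernel API): walk queued fragments until the EOR fragment, copy only what fits in the user buffer, but account the full record length so the dropped tail is still reported.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a queued rx fragment. */
struct frag {
	size_t len;
	bool eor; /* VIRTIO_VSOCK_SEQ_EOR on this fragment */
};

/* Returns the whole record length; *copied reports bytes that fit. */
static size_t dequeue_record(const struct frag *q, size_t nfrags,
			     size_t buf_len, size_t *copied)
{
	size_t record_len = 0;

	*copied = 0;
	for (size_t i = 0; i < nfrags; i++) {
		size_t chunk = q[i].len < buf_len ? q[i].len : buf_len;

		*copied += chunk;       /* copy what fits ...          */
		buf_len -= chunk;
		record_len += q[i].len; /* ... but count everything    */
		if (q[i].eor)           /* end of record: stop here,   */
			break;          /* the excess is dropped       */
	}
	return record_len;
}

/* Convenience wrappers for exercising the model. */
static size_t record_of(const struct frag *q, size_t n, size_t buf_len)
{
	size_t c;

	return dequeue_record(q, n, buf_len, &c);
}

static size_t copied_of(const struct frag *q, size_t n, size_t buf_len)
{
	size_t c;

	(void)dequeue_record(q, n, buf_len, &c);
	return c;
}

static const struct frag small_rec[] = { { 4, false }, { 4, true } };
static const struct frag big_rec[]   = { { 6, false }, { 6, true } };
```

The real callback additionally drops the rx_lock around memcpy_to_msg(), frees each fragment, and sends a credit update at the end; this sketch only shows the length accounting.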

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) 'msg_count' field added to count current number of EORs.
 2) 'msg_ready' argument removed from callback.
 3) If 'memcpy_to_msg()' fails during the copy loop, there are no
    further attempts to copy data; the rest of the record is freed.

 include/linux/virtio_vsock.h            |  5 ++
 net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index dc636b727179..1d9a302cb91d 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -36,6 +36,7 @@ struct virtio_vsock_sock {
 	u32 rx_bytes;
 	u32 buf_alloc;
 	struct list_head rx_queue;
+	u32 msg_count;
 };
 
 struct virtio_vsock_pkt {
@@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
 			       struct msghdr *msg,
 			       size_t len, int flags);
 
+ssize_t
+virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
+				   struct msghdr *msg,
+				   int flags);
 s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
 s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
 
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index ad0d34d41444..1e1df19ec164 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 	return err;
 }
 
+static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
+						 struct msghdr *msg,
+						 int flags)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	struct virtio_vsock_pkt *pkt;
+	int dequeued_len = 0;
+	size_t user_buf_len = msg_data_left(msg);
+	bool copy_failed = false;
+	bool msg_ready = false;
+
+	spin_lock_bh(&vvs->rx_lock);
+
+	if (vvs->msg_count == 0) {
+		spin_unlock_bh(&vvs->rx_lock);
+		return 0;
+	}
+
+	while (!msg_ready) {
+		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
+
+		if (!copy_failed) {
+			size_t pkt_len;
+			size_t bytes_to_copy;
+
+			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
+			bytes_to_copy = min(user_buf_len, pkt_len);
+
+			if (bytes_to_copy) {
+				int err;
+
+				/* sk_lock is held by caller so no one else can dequeue.
+				 * Unlock rx_lock since memcpy_to_msg() may sleep.
+				 */
+				spin_unlock_bh(&vvs->rx_lock);
+
+				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
+				if (err) {
+					/* Copy of message failed, set flag to skip
+					 * copy path for rest of fragments. Rest of
+					 * fragments will be freed without copy.
+					 */
+					copy_failed = true;
+					dequeued_len = err;
+				} else {
+					user_buf_len -= bytes_to_copy;
+				}
+
+				spin_lock_bh(&vvs->rx_lock);
+			}
+
+			if (dequeued_len >= 0)
+				dequeued_len += pkt_len;
+		}
+
+		if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR) {
+			msg_ready = true;
+			vvs->msg_count--;
+		}
+
+		virtio_transport_dec_rx_pkt(vvs, pkt);
+		list_del(&pkt->list);
+		virtio_transport_free_pkt(pkt);
+	}
+
+	spin_unlock_bh(&vvs->rx_lock);
+
+	virtio_transport_send_credit_update(vsk);
+
+	return dequeued_len;
+}
+
 ssize_t
 virtio_transport_stream_dequeue(struct vsock_sock *vsk,
 				struct msghdr *msg,
@@ -405,6 +477,18 @@ virtio_transport_stream_dequeue(struct vsock_sock *vsk,
 }
 EXPORT_SYMBOL_GPL(virtio_transport_stream_dequeue);
 
+ssize_t
+virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
+				   struct msghdr *msg,
+				   int flags)
+{
+	if (flags & MSG_PEEK)
+		return -EOPNOTSUPP;
+
+	return virtio_transport_seqpacket_do_dequeue(vsk, msg, flags);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
+
 int
 virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
 			       struct msghdr *msg,
-- 
2.25.1



* [PATCH v11 12/18] virtio/vsock: add SEQPACKET receive logic
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (10 preceding siblings ...)
  2021-06-11 11:12 ` [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET Arseny Krasnov
@ 2021-06-11 11:12 ` Arseny Krasnov
  2021-06-11 11:13 ` [PATCH v11 13/18] virtio/vsock: rest of SOCK_SEQPACKET support Arseny Krasnov
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:12 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Jorgen Hansen, Norbert Slusarek, Colin Ian King, Andra Paraschiv
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Update the current receive logic for SEQPACKET support: check that
the packet and socket types match on receive (on mismatch, reset the
connection). Increment the EOR counter on receive. Also, if the
buffer of a new packet is appended to the buffer of the last packet
in the rx queue, update the last packet's flags with the flags of
the new packet.
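
The merge condition this patch tightens can be sketched in isolation (struct and names are illustrative): a small packet may be folded into the last queued packet's buffer only if there is room and the last packet does not already carry SEQ_EOR, since merging across an EOR would glue two records together.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SEQ_EOR 1u /* stand-in for VIRTIO_VSOCK_SEQ_EOR */

/* Toy model of a queued packet: bytes used, buffer capacity, flags. */
struct mini_pkt {
	size_t len;
	size_t buf_len;
	unsigned int flags;
};

/* Merge only when the new payload fits in the last packet's buffer
 * AND the last packet does not end a record.
 */
static inline bool can_merge(const struct mini_pkt *last,
			     const struct mini_pkt *next)
{
	return next->len <= last->buf_len - last->len &&
	       !(last->flags & SEQ_EOR);
}

static const struct mini_pkt last_open = { 10, 64, 0 };
static const struct mini_pkt last_eor  = { 10, 64, SEQ_EOR };
static const struct mini_pkt tiny      = { 8, 8, 0 };
static const struct mini_pkt huge      = { 100, 100, 0 };
```

After a successful merge the real code also ORs the new packet's flags into the last packet, so an EOR arriving in a merged fragment is not lost.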

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) 'msg_count' processing added.
 2) Comment updated.
 3) Commit message updated.

 net/vmw_vsock/virtio_transport_common.c | 34 ++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 1e1df19ec164..3a658ff8fccb 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -165,6 +165,14 @@ void virtio_transport_deliver_tap_pkt(struct virtio_vsock_pkt *pkt)
 }
 EXPORT_SYMBOL_GPL(virtio_transport_deliver_tap_pkt);
 
+static u16 virtio_transport_get_type(struct sock *sk)
+{
+	if (sk->sk_type == SOCK_STREAM)
+		return VIRTIO_VSOCK_TYPE_STREAM;
+	else
+		return VIRTIO_VSOCK_TYPE_SEQPACKET;
+}
+
 /* This function can only be used on connecting/connected sockets,
  * since a socket assigned to a transport is required.
  *
@@ -987,6 +995,9 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk,
 		goto out;
 	}
 
+	if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR)
+		vvs->msg_count++;
+
 	/* Try to copy small packets into the buffer of last packet queued,
 	 * to avoid wasting memory queueing the entire buffer with a small
 	 * payload.
@@ -998,13 +1009,18 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk,
 					   struct virtio_vsock_pkt, list);
 
 		/* If there is space in the last packet queued, we copy the
-		 * new packet in its buffer.
+		 * new packet in its buffer. We avoid this if the last packet
+		 * queued has VIRTIO_VSOCK_SEQ_EOR set, because this is
+		 * delimiter of SEQPACKET record, so 'pkt' is the first packet
+		 * of a new record.
 		 */
-		if (pkt->len <= last_pkt->buf_len - last_pkt->len) {
+		if ((pkt->len <= last_pkt->buf_len - last_pkt->len) &&
+		    !(le32_to_cpu(last_pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR)) {
 			memcpy(last_pkt->buf + last_pkt->len, pkt->buf,
 			       pkt->len);
 			last_pkt->len += pkt->len;
 			free_pkt = true;
+			last_pkt->hdr.flags |= pkt->hdr.flags;
 			goto out;
 		}
 	}
@@ -1170,6 +1186,12 @@ virtio_transport_recv_listen(struct sock *sk, struct virtio_vsock_pkt *pkt,
 	return 0;
 }
 
+static bool virtio_transport_valid_type(u16 type)
+{
+	return (type == VIRTIO_VSOCK_TYPE_STREAM) ||
+	       (type == VIRTIO_VSOCK_TYPE_SEQPACKET);
+}
+
 /* We are under the virtio-vsock's vsock->rx_lock or vhost-vsock's vq->mutex
  * lock.
  */
@@ -1195,7 +1217,7 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
 					le32_to_cpu(pkt->hdr.buf_alloc),
 					le32_to_cpu(pkt->hdr.fwd_cnt));
 
-	if (le16_to_cpu(pkt->hdr.type) != VIRTIO_VSOCK_TYPE_STREAM) {
+	if (!virtio_transport_valid_type(le16_to_cpu(pkt->hdr.type))) {
 		(void)virtio_transport_reset_no_sock(t, pkt);
 		goto free_pkt;
 	}
@@ -1212,6 +1234,12 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
 		}
 	}
 
+	if (virtio_transport_get_type(sk) != le16_to_cpu(pkt->hdr.type)) {
+		(void)virtio_transport_reset_no_sock(t, pkt);
+		sock_put(sk);
+		goto free_pkt;
+	}
+
 	vsk = vsock_sk(sk);
 
 	lock_sock(sk);
-- 
2.25.1



* [PATCH v11 13/18] virtio/vsock: rest of SOCK_SEQPACKET support
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (11 preceding siblings ...)
  2021-06-11 11:12 ` [PATCH v11 12/18] virtio/vsock: add SEQPACKET receive logic Arseny Krasnov
@ 2021-06-11 11:13 ` Arseny Krasnov
  2021-06-11 11:13 ` [PATCH v11 14/18] virtio/vsock: enable SEQPACKET for transport Arseny Krasnov
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:13 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Colin Ian King, Norbert Slusarek, Andra Paraschiv
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Small updates to make SOCK_SEQPACKET work:
1) Send SHUTDOWN on socket close for the SEQPACKET type.
2) Set the SEQPACKET packet type during send.
3) Set the 'VIRTIO_VSOCK_SEQ_EOR' bit in the flags of the last
   packet of a message.
4) Implement the data check function for SEQPACKET.
5) Check for the max datagram size.
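
Point 3 above can be sketched with two small helpers (names and constants are illustrative): when a record is split into packet-sized fragments, only the fragment that exhausts the message carries the EOR bit.

```c
#include <assert.h>
#include <stddef.h>

#define SEQ_EOR 1u /* stand-in for VIRTIO_VSOCK_SEQ_EOR */

/* Flags for a fragment, given how many bytes of the message remain
 * after it has been copied out: only the last fragment gets EOR.
 */
static inline unsigned int frag_flags(size_t remaining_after_copy)
{
	return remaining_after_copy == 0 ? SEQ_EOR : 0;
}

/* Number of fragments needed for a record of 'len' bytes with a
 * maximum fragment payload of 'max' bytes (zero-length records still
 * need one packet to carry the EOR mark).
 */
static inline size_t nfrags(size_t len, size_t max)
{
	return len == 0 ? 1 : (len + max - 1) / max;
}
```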

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) The way 'flags' is set was changed; '|=' is now used.
 2) A check for the max datagram length was added; EMSGSIZE is
    returned when it is exceeded.
 3) 'virtio_transport_seqpacket_has_data()' function added.

 include/linux/virtio_vsock.h            |  5 +++
 net/vmw_vsock/virtio_transport_common.c | 41 +++++++++++++++++++++++--
 2 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 1d9a302cb91d..35d7eedb5e8e 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -81,12 +81,17 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
 			       struct msghdr *msg,
 			       size_t len, int flags);
 
+int
+virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk,
+				   struct msghdr *msg,
+				   size_t len);
 ssize_t
 virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
 				   struct msghdr *msg,
 				   int flags);
 s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
 s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
+u32 virtio_transport_seqpacket_has_data(struct vsock_sock *vsk);
 
 int virtio_transport_do_socket_init(struct vsock_sock *vsk,
 				 struct vsock_sock *psk);
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 3a658ff8fccb..23704a6bc437 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -74,6 +74,10 @@ virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
 		err = memcpy_from_msg(pkt->buf, info->msg, len);
 		if (err)
 			goto out;
+
+		if (msg_data_left(info->msg) == 0 &&
+		    info->type == VIRTIO_VSOCK_TYPE_SEQPACKET)
+			pkt->hdr.flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
 	}
 
 	trace_virtio_transport_alloc_pkt(src_cid, src_port,
@@ -187,7 +191,7 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
 	struct virtio_vsock_pkt *pkt;
 	u32 pkt_len = info->pkt_len;
 
-	info->type = VIRTIO_VSOCK_TYPE_STREAM;
+	info->type = virtio_transport_get_type(sk_vsock(vsk));
 
 	t_ops = virtio_transport_get_ops(vsk);
 	if (unlikely(!t_ops))
@@ -497,6 +501,26 @@ virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
 }
 EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
 
+int
+virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk,
+				   struct msghdr *msg,
+				   size_t len)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+
+	spin_lock_bh(&vvs->tx_lock);
+
+	if (len > vvs->peer_buf_alloc) {
+		spin_unlock_bh(&vvs->tx_lock);
+		return -EMSGSIZE;
+	}
+
+	spin_unlock_bh(&vvs->tx_lock);
+
+	return virtio_transport_stream_enqueue(vsk, msg, len);
+}
+EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_enqueue);
+
 int
 virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
 			       struct msghdr *msg,
@@ -519,6 +543,19 @@ s64 virtio_transport_stream_has_data(struct vsock_sock *vsk)
 }
 EXPORT_SYMBOL_GPL(virtio_transport_stream_has_data);
 
+u32 virtio_transport_seqpacket_has_data(struct vsock_sock *vsk)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	u32 msg_count;
+
+	spin_lock_bh(&vvs->rx_lock);
+	msg_count = vvs->msg_count;
+	spin_unlock_bh(&vvs->rx_lock);
+
+	return msg_count;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_has_data);
+
 static s64 virtio_transport_has_space(struct vsock_sock *vsk)
 {
 	struct virtio_vsock_sock *vvs = vsk->trans;
@@ -931,7 +968,7 @@ void virtio_transport_release(struct vsock_sock *vsk)
 	struct sock *sk = &vsk->sk;
 	bool remove_sock = true;
 
-	if (sk->sk_type == SOCK_STREAM)
+	if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET)
 		remove_sock = virtio_transport_close(vsk);
 
 	if (remove_sock) {
-- 
2.25.1



* [PATCH v11 14/18] virtio/vsock: enable SEQPACKET for transport
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (12 preceding siblings ...)
  2021-06-11 11:13 ` [PATCH v11 13/18] virtio/vsock: rest of SOCK_SEQPACKET support Arseny Krasnov
@ 2021-06-11 11:13 ` Arseny Krasnov
  2021-06-11 11:13 ` [PATCH v11 15/18] vhost/vsock: support " Arseny Krasnov
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:13 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Colin Ian King, Andra Paraschiv, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

To make the transport work with SOCK_SEQPACKET, add two things:
1) SOCK_SEQPACKET ops for the virtio transport and a 'seqpacket_allow()'
   callback.
2) Handling of the SEQPACKET feature bit: the guest tries to negotiate
   it with vhost, so the feature is enabled only if the bit is
   negotiated with the device.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) 'seqpacket_has_data()' callback set.
 2) Commit message updated.

 net/vmw_vsock/virtio_transport.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 2700a63ab095..e73ce652bf3c 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -62,6 +62,7 @@ struct virtio_vsock {
 	struct virtio_vsock_event event_list[8];
 
 	u32 guest_cid;
+	bool seqpacket_allow;
 };
 
 static u32 virtio_transport_get_local_cid(void)
@@ -443,6 +444,8 @@ static void virtio_vsock_rx_done(struct virtqueue *vq)
 	queue_work(virtio_vsock_workqueue, &vsock->rx_work);
 }
 
+static bool virtio_transport_seqpacket_allow(u32 remote_cid);
+
 static struct virtio_transport virtio_transport = {
 	.transport = {
 		.module                   = THIS_MODULE,
@@ -469,6 +472,11 @@ static struct virtio_transport virtio_transport = {
 		.stream_is_active         = virtio_transport_stream_is_active,
 		.stream_allow             = virtio_transport_stream_allow,
 
+		.seqpacket_dequeue        = virtio_transport_seqpacket_dequeue,
+		.seqpacket_enqueue        = virtio_transport_seqpacket_enqueue,
+		.seqpacket_allow          = virtio_transport_seqpacket_allow,
+		.seqpacket_has_data       = virtio_transport_seqpacket_has_data,
+
 		.notify_poll_in           = virtio_transport_notify_poll_in,
 		.notify_poll_out          = virtio_transport_notify_poll_out,
 		.notify_recv_init         = virtio_transport_notify_recv_init,
@@ -485,6 +493,19 @@ static struct virtio_transport virtio_transport = {
 	.send_pkt = virtio_transport_send_pkt,
 };
 
+static bool virtio_transport_seqpacket_allow(u32 remote_cid)
+{
+	struct virtio_vsock *vsock;
+	bool seqpacket_allow;
+
+	rcu_read_lock();
+	vsock = rcu_dereference(the_virtio_vsock);
+	seqpacket_allow = vsock->seqpacket_allow;
+	rcu_read_unlock();
+
+	return seqpacket_allow;
+}
+
 static void virtio_transport_rx_work(struct work_struct *work)
 {
 	struct virtio_vsock *vsock =
@@ -608,10 +629,14 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	vsock->event_run = true;
 	mutex_unlock(&vsock->event_lock);
 
+	if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET))
+		vsock->seqpacket_allow = true;
+
 	vdev->priv = vsock;
 	rcu_assign_pointer(the_virtio_vsock, vsock);
 
 	mutex_unlock(&the_virtio_vsock_mutex);
+
 	return 0;
 
 out:
@@ -695,6 +720,7 @@ static struct virtio_device_id id_table[] = {
 };
 
 static unsigned int features[] = {
+	VIRTIO_VSOCK_F_SEQPACKET
 };
 
 static struct virtio_driver virtio_vsock_driver = {
-- 
2.25.1



* [PATCH v11 15/18] vhost/vsock: support SEQPACKET for transport
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (13 preceding siblings ...)
  2021-06-11 11:13 ` [PATCH v11 14/18] virtio/vsock: enable SEQPACKET for transport Arseny Krasnov
@ 2021-06-11 11:13 ` Arseny Krasnov
  2021-06-11 11:13 ` [PATCH v11 16/18] vsock/loopback: enable " Arseny Krasnov
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:13 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Norbert Slusarek, Andra Paraschiv, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

When a received packet is copied to the guest's rx queue, the data
buffers of the rx queue can be smaller than the data buffer of the
input packet, so the input packet's data is copied to each rx buffer
and each rx buffer becomes a packet with a dynamically created header.
The fields of such a header are initialized from the header of the
input packet (except the length field, whose value depends on the
number of bytes copied to the rx buffer). But in the SEQPACKET case,
we also need to take care of the record delimiter bit: if the input
packet has this bit set, we don't copy it to the header of the packet
in an rx buffer, except when that rx buffer holds the last part of
the input packet. Otherwise, we would get a sequence of packets with
the delimiter bit set, thus breaking record bounds.
Also stop ignoring non-stream packet types and handle the SEQPACKET
feature bit.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) Large comment added to describe idea of patch.
 2) Commit message updated.
 3) 'seqpacket_has_data()' callback set.

 drivers/vhost/vsock.c | 56 +++++++++++++++++++++++++++++++++++++++----
 1 file changed, 52 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 5e78fb719602..119f08491d3c 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -31,7 +31,8 @@
 
 enum {
 	VHOST_VSOCK_FEATURES = VHOST_FEATURES |
-			       (1ULL << VIRTIO_F_ACCESS_PLATFORM)
+			       (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
+			       (1ULL << VIRTIO_VSOCK_F_SEQPACKET)
 };
 
 enum {
@@ -56,6 +57,7 @@ struct vhost_vsock {
 	atomic_t queued_replies;
 
 	u32 guest_cid;
+	bool seqpacket_allow;
 };
 
 static u32 vhost_transport_get_local_cid(void)
@@ -112,6 +114,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 		size_t nbytes;
 		size_t iov_len, payload_len;
 		int head;
+		bool restore_flag = false;
 
 		spin_lock_bh(&vsock->send_pkt_list_lock);
 		if (list_empty(&vsock->send_pkt_list)) {
@@ -168,9 +171,26 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 		/* If the packet is greater than the space available in the
 		 * buffer, we split it using multiple buffers.
 		 */
-		if (payload_len > iov_len - sizeof(pkt->hdr))
+		if (payload_len > iov_len - sizeof(pkt->hdr)) {
 			payload_len = iov_len - sizeof(pkt->hdr);
 
+			/* As we are copying pieces of a large packet's buffer
+			 * to small rx buffers, the headers of the packets in
+			 * the rx queue are created dynamically and initialized
+			 * from the header of the current packet (except the
+			 * length). But in the SOCK_SEQPACKET case we must also
+			 * clear the record delimiter bit (VIRTIO_VSOCK_SEQ_EOR).
+			 * Otherwise, instead of one packet with the delimiter
+			 * (which marks the end of a record), there would be a
+			 * sequence of packets with the delimiter bit set. After
+			 * the header has been copied, this bit is restored.
+			 */
+			if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR) {
+				pkt->hdr.flags &= ~cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
+				restore_flag = true;
+			}
+		}
+
 		/* Set the correct length in the header */
 		pkt->hdr.len = cpu_to_le32(payload_len);
 
@@ -204,6 +224,9 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 		 * to send it with the next available buffer.
 		 */
 		if (pkt->off < pkt->len) {
+			if (restore_flag)
+				pkt->hdr.flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
+
 			/* We are queueing the same virtio_vsock_pkt to handle
 			 * the remaining bytes, and we want to deliver it
 			 * to monitoring devices in the next iteration.
@@ -354,8 +377,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
 		return NULL;
 	}
 
-	if (le16_to_cpu(pkt->hdr.type) == VIRTIO_VSOCK_TYPE_STREAM)
-		pkt->len = le32_to_cpu(pkt->hdr.len);
+	pkt->len = le32_to_cpu(pkt->hdr.len);
 
 	/* No payload */
 	if (!pkt->len)
@@ -398,6 +420,8 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
 	return val < vq->num;
 }
 
+static bool vhost_transport_seqpacket_allow(u32 remote_cid);
+
 static struct virtio_transport vhost_transport = {
 	.transport = {
 		.module                   = THIS_MODULE,
@@ -424,6 +448,11 @@ static struct virtio_transport vhost_transport = {
 		.stream_is_active         = virtio_transport_stream_is_active,
 		.stream_allow             = virtio_transport_stream_allow,
 
+		.seqpacket_dequeue        = virtio_transport_seqpacket_dequeue,
+		.seqpacket_enqueue        = virtio_transport_seqpacket_enqueue,
+		.seqpacket_allow          = vhost_transport_seqpacket_allow,
+		.seqpacket_has_data       = virtio_transport_seqpacket_has_data,
+
 		.notify_poll_in           = virtio_transport_notify_poll_in,
 		.notify_poll_out          = virtio_transport_notify_poll_out,
 		.notify_recv_init         = virtio_transport_notify_recv_init,
@@ -441,6 +470,22 @@ static struct virtio_transport vhost_transport = {
 	.send_pkt = vhost_transport_send_pkt,
 };
 
+static bool vhost_transport_seqpacket_allow(u32 remote_cid)
+{
+	struct vhost_vsock *vsock;
+	bool seqpacket_allow = false;
+
+	rcu_read_lock();
+	vsock = vhost_vsock_get(remote_cid);
+
+	if (vsock)
+		seqpacket_allow = vsock->seqpacket_allow;
+
+	rcu_read_unlock();
+
+	return seqpacket_allow;
+}
+
 static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
 {
 	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
@@ -785,6 +830,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features)
 			goto err;
 	}
 
+	if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET))
+		vsock->seqpacket_allow = true;
+
 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
 		vq = &vsock->vqs[i];
 		mutex_lock(&vq->mutex);
-- 
2.25.1



* [PATCH v11 16/18] vsock/loopback: enable SEQPACKET for transport
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (14 preceding siblings ...)
  2021-06-11 11:13 ` [PATCH v11 15/18] vhost/vsock: support " Arseny Krasnov
@ 2021-06-11 11:13 ` Arseny Krasnov
  2021-06-11 11:14 ` [PATCH v11 17/18] vsock_test: add SOCK_SEQPACKET tests Arseny Krasnov
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:13 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Andra Paraschiv, Colin Ian King, Norbert Slusarek
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Add SEQPACKET ops for the loopback transport and a 'seqpacket_allow()'
callback.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) 'seqpacket_has_data()' callback set.
 2) Reviewed-By removed.

 net/vmw_vsock/vsock_loopback.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index a45f7ffca8c5..169a8cf65b39 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -63,6 +63,8 @@ static int vsock_loopback_cancel_pkt(struct vsock_sock *vsk)
 	return 0;
 }
 
+static bool vsock_loopback_seqpacket_allow(u32 remote_cid);
+
 static struct virtio_transport loopback_transport = {
 	.transport = {
 		.module                   = THIS_MODULE,
@@ -89,6 +91,11 @@ static struct virtio_transport loopback_transport = {
 		.stream_is_active         = virtio_transport_stream_is_active,
 		.stream_allow             = virtio_transport_stream_allow,
 
+		.seqpacket_dequeue        = virtio_transport_seqpacket_dequeue,
+		.seqpacket_enqueue        = virtio_transport_seqpacket_enqueue,
+		.seqpacket_allow          = vsock_loopback_seqpacket_allow,
+		.seqpacket_has_data       = virtio_transport_seqpacket_has_data,
+
 		.notify_poll_in           = virtio_transport_notify_poll_in,
 		.notify_poll_out          = virtio_transport_notify_poll_out,
 		.notify_recv_init         = virtio_transport_notify_recv_init,
@@ -105,6 +112,11 @@ static struct virtio_transport loopback_transport = {
 	.send_pkt = vsock_loopback_send_pkt,
 };
 
+static bool vsock_loopback_seqpacket_allow(u32 remote_cid)
+{
+	return true;
+}
+
 static void vsock_loopback_work(struct work_struct *work)
 {
 	struct vsock_loopback *vsock =
-- 
2.25.1



* [PATCH v11 17/18] vsock_test: add SOCK_SEQPACKET tests
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (15 preceding siblings ...)
  2021-06-11 11:13 ` [PATCH v11 16/18] vsock/loopback: enable " Arseny Krasnov
@ 2021-06-11 11:14 ` Arseny Krasnov
  2021-06-11 11:14 ` [PATCH v11 18/18] virtio/vsock: update trace event for SEQPACKET Arseny Krasnov
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:14 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Arseny Krasnov,
	Jorgen Hansen, Norbert Slusarek, Andra Paraschiv, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Implement two tests for SOCK_SEQPACKET sockets: the first sends data
with several 'write()' calls and checks that the number of 'read()'
calls is the same. The second test checks the MSG_TRUNC flag. Cases
for connect(), bind(), etc. are not tested, because they are the same
as for stream sockets.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 tools/testing/vsock/util.c       |  32 +++++++--
 tools/testing/vsock/util.h       |   3 +
 tools/testing/vsock/vsock_test.c | 116 +++++++++++++++++++++++++++++++
 3 files changed, 146 insertions(+), 5 deletions(-)

diff --git a/tools/testing/vsock/util.c b/tools/testing/vsock/util.c
index 93cbd6f603f9..2acbb7703c6a 100644
--- a/tools/testing/vsock/util.c
+++ b/tools/testing/vsock/util.c
@@ -84,7 +84,7 @@ void vsock_wait_remote_close(int fd)
 }
 
 /* Connect to <cid, port> and return the file descriptor. */
-int vsock_stream_connect(unsigned int cid, unsigned int port)
+static int vsock_connect(unsigned int cid, unsigned int port, int type)
 {
 	union {
 		struct sockaddr sa;
@@ -101,7 +101,7 @@ int vsock_stream_connect(unsigned int cid, unsigned int port)
 
 	control_expectln("LISTENING");
 
-	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+	fd = socket(AF_VSOCK, type, 0);
 
 	timeout_begin(TIMEOUT);
 	do {
@@ -120,11 +120,21 @@ int vsock_stream_connect(unsigned int cid, unsigned int port)
 	return fd;
 }
 
+int vsock_stream_connect(unsigned int cid, unsigned int port)
+{
+	return vsock_connect(cid, port, SOCK_STREAM);
+}
+
+int vsock_seqpacket_connect(unsigned int cid, unsigned int port)
+{
+	return vsock_connect(cid, port, SOCK_SEQPACKET);
+}
+
 /* Listen on <cid, port> and return the first incoming connection.  The remote
  * address is stored to clientaddrp.  clientaddrp may be NULL.
  */
-int vsock_stream_accept(unsigned int cid, unsigned int port,
-			struct sockaddr_vm *clientaddrp)
+static int vsock_accept(unsigned int cid, unsigned int port,
+			struct sockaddr_vm *clientaddrp, int type)
 {
 	union {
 		struct sockaddr sa;
@@ -145,7 +155,7 @@ int vsock_stream_accept(unsigned int cid, unsigned int port,
 	int client_fd;
 	int old_errno;
 
-	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+	fd = socket(AF_VSOCK, type, 0);
 
 	if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) {
 		perror("bind");
@@ -189,6 +199,18 @@ int vsock_stream_accept(unsigned int cid, unsigned int port,
 	return client_fd;
 }
 
+int vsock_stream_accept(unsigned int cid, unsigned int port,
+			struct sockaddr_vm *clientaddrp)
+{
+	return vsock_accept(cid, port, clientaddrp, SOCK_STREAM);
+}
+
+int vsock_seqpacket_accept(unsigned int cid, unsigned int port,
+			   struct sockaddr_vm *clientaddrp)
+{
+	return vsock_accept(cid, port, clientaddrp, SOCK_SEQPACKET);
+}
+
 /* Transmit one byte and check the return value.
  *
  * expected_ret:
diff --git a/tools/testing/vsock/util.h b/tools/testing/vsock/util.h
index e53dd09d26d9..a3375ad2fb7f 100644
--- a/tools/testing/vsock/util.h
+++ b/tools/testing/vsock/util.h
@@ -36,8 +36,11 @@ struct test_case {
 void init_signals(void);
 unsigned int parse_cid(const char *str);
 int vsock_stream_connect(unsigned int cid, unsigned int port);
+int vsock_seqpacket_connect(unsigned int cid, unsigned int port);
 int vsock_stream_accept(unsigned int cid, unsigned int port,
 			struct sockaddr_vm *clientaddrp);
+int vsock_seqpacket_accept(unsigned int cid, unsigned int port,
+			   struct sockaddr_vm *clientaddrp);
 void vsock_wait_remote_close(int fd);
 void send_byte(int fd, int expected_ret, int flags);
 void recv_byte(int fd, int expected_ret, int flags);
diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
index 5a4fb80fa832..67766bfe176f 100644
--- a/tools/testing/vsock/vsock_test.c
+++ b/tools/testing/vsock/vsock_test.c
@@ -14,6 +14,8 @@
 #include <errno.h>
 #include <unistd.h>
 #include <linux/kernel.h>
+#include <sys/types.h>
+#include <sys/socket.h>
 
 #include "timeout.h"
 #include "control.h"
@@ -279,6 +281,110 @@ static void test_stream_msg_peek_server(const struct test_opts *opts)
 	close(fd);
 }
 
+#define MESSAGES_CNT 7
+static void test_seqpacket_msg_bounds_client(const struct test_opts *opts)
+{
+	int fd;
+
+	fd = vsock_seqpacket_connect(opts->peer_cid, 1234);
+	if (fd < 0) {
+		perror("connect");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Send several one-byte messages */
+	for (int i = 0; i < MESSAGES_CNT; i++)
+		send_byte(fd, 1, 0);
+
+	control_writeln("SENDDONE");
+	close(fd);
+}
+
+static void test_seqpacket_msg_bounds_server(const struct test_opts *opts)
+{
+	int fd;
+	char buf[16];
+	struct msghdr msg = {0};
+	struct iovec iov = {0};
+
+	fd = vsock_seqpacket_accept(VMADDR_CID_ANY, 1234, NULL);
+	if (fd < 0) {
+		perror("accept");
+		exit(EXIT_FAILURE);
+	}
+
+	control_expectln("SENDDONE");
+	iov.iov_base = buf;
+	iov.iov_len = sizeof(buf);
+	msg.msg_iov = &iov;
+	msg.msg_iovlen = 1;
+
+	for (int i = 0; i < MESSAGES_CNT; i++) {
+		if (recvmsg(fd, &msg, 0) != 1) {
+			perror("message bound violated");
+			exit(EXIT_FAILURE);
+		}
+	}
+
+	close(fd);
+}
+
+#define MESSAGE_TRUNC_SZ 32
+static void test_seqpacket_msg_trunc_client(const struct test_opts *opts)
+{
+	int fd;
+	char buf[MESSAGE_TRUNC_SZ];
+
+	fd = vsock_seqpacket_connect(opts->peer_cid, 1234);
+	if (fd < 0) {
+		perror("connect");
+		exit(EXIT_FAILURE);
+	}
+
+	if (send(fd, buf, sizeof(buf), 0) != sizeof(buf)) {
+		perror("send failed");
+		exit(EXIT_FAILURE);
+	}
+
+	control_writeln("SENDDONE");
+	close(fd);
+}
+
+static void test_seqpacket_msg_trunc_server(const struct test_opts *opts)
+{
+	int fd;
+	char buf[MESSAGE_TRUNC_SZ / 2];
+	struct msghdr msg = {0};
+	struct iovec iov = {0};
+
+	fd = vsock_seqpacket_accept(VMADDR_CID_ANY, 1234, NULL);
+	if (fd < 0) {
+		perror("accept");
+		exit(EXIT_FAILURE);
+	}
+
+	control_expectln("SENDDONE");
+	iov.iov_base = buf;
+	iov.iov_len = sizeof(buf);
+	msg.msg_iov = &iov;
+	msg.msg_iovlen = 1;
+
+	ssize_t ret = recvmsg(fd, &msg, MSG_TRUNC);
+
+	if (ret != MESSAGE_TRUNC_SZ) {
+		printf("%zi\n", ret);
+		perror("MSG_TRUNC doesn't work");
+		exit(EXIT_FAILURE);
+	}
+
+	if (!(msg.msg_flags & MSG_TRUNC)) {
+		fprintf(stderr, "MSG_TRUNC expected\n");
+		exit(EXIT_FAILURE);
+	}
+
+	close(fd);
+}
+
 static struct test_case test_cases[] = {
 	{
 		.name = "SOCK_STREAM connection reset",
@@ -309,6 +415,16 @@ static struct test_case test_cases[] = {
 		.run_client = test_stream_msg_peek_client,
 		.run_server = test_stream_msg_peek_server,
 	},
+	{
+		.name = "SOCK_SEQPACKET msg bounds",
+		.run_client = test_seqpacket_msg_bounds_client,
+		.run_server = test_seqpacket_msg_bounds_server,
+	},
+	{
+		.name = "SOCK_SEQPACKET MSG_TRUNC flag",
+		.run_client = test_seqpacket_msg_trunc_client,
+		.run_server = test_seqpacket_msg_trunc_server,
+	},
 	{},
 };
 
-- 
2.25.1



* [PATCH v11 18/18] virtio/vsock: update trace event for SEQPACKET
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (16 preceding siblings ...)
  2021-06-11 11:14 ` [PATCH v11 17/18] vsock_test: add SOCK_SEQPACKET tests Arseny Krasnov
@ 2021-06-11 11:14 ` Arseny Krasnov
  2021-06-11 11:17 ` [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
  2021-06-11 21:00 ` patchwork-bot+netdevbpf
  19 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:14 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Steven Rostedt,
	Ingo Molnar, Arseny Krasnov, Jorgen Hansen, Andra Paraschiv,
	Norbert Slusarek, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa

Add SEQPACKET socket type to vsock trace event.

Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
---
 v10 -> v11:
 1) Indentation fixed.

 include/trace/events/vsock_virtio_transport_common.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/trace/events/vsock_virtio_transport_common.h b/include/trace/events/vsock_virtio_transport_common.h
index 6782213778be..d0b3f0ea9ba1 100644
--- a/include/trace/events/vsock_virtio_transport_common.h
+++ b/include/trace/events/vsock_virtio_transport_common.h
@@ -9,9 +9,12 @@
 #include <linux/tracepoint.h>
 
 TRACE_DEFINE_ENUM(VIRTIO_VSOCK_TYPE_STREAM);
+TRACE_DEFINE_ENUM(VIRTIO_VSOCK_TYPE_SEQPACKET);
 
 #define show_type(val) \
-	__print_symbolic(val, { VIRTIO_VSOCK_TYPE_STREAM, "STREAM" })
+	__print_symbolic(val, \
+			 { VIRTIO_VSOCK_TYPE_STREAM, "STREAM" }, \
+			 { VIRTIO_VSOCK_TYPE_SEQPACKET, "SEQPACKET" })
 
 TRACE_DEFINE_ENUM(VIRTIO_VSOCK_OP_INVALID);
 TRACE_DEFINE_ENUM(VIRTIO_VSOCK_OP_REQUEST);
-- 
2.25.1



* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (17 preceding siblings ...)
  2021-06-11 11:14 ` [PATCH v11 18/18] virtio/vsock: update trace event for SEQPACKET Arseny Krasnov
@ 2021-06-11 11:17 ` Arseny Krasnov
  2021-06-11 12:25     ` Stefano Garzarella
  2021-06-11 21:00 ` patchwork-bot+netdevbpf
  19 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 11:17 UTC (permalink / raw)
  To: Stefan Hajnoczi, Stefano Garzarella, Michael S. Tsirkin,
	Jason Wang, David S. Miller, Jakub Kicinski, Andra Paraschiv,
	Norbert Slusarek, Colin Ian King
  Cc: kvm, virtualization, netdev, linux-kernel, oxffffaa


On 11.06.2021 14:07, Arseny Krasnov wrote:
> 	This patchset implements support of SOCK_SEQPACKET for virtio
> transport.
> 	As SOCK_SEQPACKET guarantees to save record boundaries, so to
> do it, new bit for field 'flags' was added: SEQ_EOR. This bit is
> set to 1 in last RW packet of message.
> 	Now as  packets of one socket are not reordered neither on vsock
> nor on vhost transport layers, such bit allows to restore original
> message on receiver's side. If user's buffer is smaller than message
> length, when all out of size data is dropped.
> 	Maximum length of datagram is limited by 'peer_buf_alloc' value.
> 	Implementation also supports 'MSG_TRUNC' flags.
> 	Tests also implemented.
>
> 	Thanks to stsp2@yandex.ru for encouragements and initial design
> recommendations.
>
>  Arseny Krasnov (18):
>   af_vsock: update functions for connectible socket
>   af_vsock: separate wait data loop
>   af_vsock: separate receive data loop
>   af_vsock: implement SEQPACKET receive loop
>   af_vsock: implement send logic for SEQPACKET
>   af_vsock: rest of SEQPACKET support
>   af_vsock: update comments for stream sockets
>   virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>   virtio/vsock: simplify credit update function API
>   virtio/vsock: defines and constants for SEQPACKET
>   virtio/vsock: dequeue callback for SOCK_SEQPACKET
>   virtio/vsock: add SEQPACKET receive logic
>   virtio/vsock: rest of SOCK_SEQPACKET support
>   virtio/vsock: enable SEQPACKET for transport
>   vhost/vsock: enable SEQPACKET for transport
>   vsock/loopback: enable SEQPACKET for transport
>   vsock_test: add SOCK_SEQPACKET tests
>   virtio/vsock: update trace event for SEQPACKET
>
>  drivers/vhost/vsock.c                              |  56 ++-
>  include/linux/virtio_vsock.h                       |  10 +
>  include/net/af_vsock.h                             |   8 +
>  .../trace/events/vsock_virtio_transport_common.h   |   5 +-
>  include/uapi/linux/virtio_vsock.h                  |   9 +
>  net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
>  net/vmw_vsock/virtio_transport.c                   |  26 ++
>  net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
>  net/vmw_vsock/vsock_loopback.c                     |  12 +
>  tools/testing/vsock/util.c                         |  32 +-
>  tools/testing/vsock/util.h                         |   3 +
>  tools/testing/vsock/vsock_test.c                   | 116 ++++++
>  12 files changed, 730 insertions(+), 190 deletions(-)
>
>  v10 -> v11:
>  General changelog:
>   - now data is copied to user's buffer only when
>     whole message is received.
>   - reader is woken up when EOR packet is received.
>   - if read syscall was interrupted by signal or
>     timeout, error is returned(not 0).
>
>  Per patch changelog:
>   see every patch after '---' line.
So here is the new version for review, with the updates discussed earlier :)
>
>  v9 -> v10:
>  General changelog:
>  - patch for write serialization removed from patchset.
>  - commit messages rephrased
>
>  Per patch changelog:
>   see every patch after '---' line.
>
>  v8 -> v9:
>  General changelog:
>  - see per patch change log.
>
>  Per patch changelog:
>   see every patch after '---' line.
>
>  v7 -> v8:
>  General changelog:
>  - whole idea is simplified: channel now considered reliable,
>    so SEQ_BEGIN, SEQ_END, 'msg_len' and 'msg_id' were removed.
>    Only thing that is used to mark end of message is bit in
>    'flags' field of packet header: VIRTIO_VSOCK_SEQ_EOR. Packet
>    with such bit set to 1 means, that this is last packet of
>    message.
>
>  - POSIX MSG_EOR support is removed, as there is no exact
>    description how it works.
>
>  - all changes to 'include/uapi/linux/virtio_vsock.h' moved
>    to dedicated patch, as these changes linked with patch to
>    spec.
>
>  - patch 'virtio/vsock: SEQPACKET feature bit support' now merged
>    to 'virtio/vsock: setup SEQPACKET ops for transport'.
>
>  - patch 'vhost/vsock: SEQPACKET feature bit support' now merged
>    to 'vhost/vsock: setup SEQPACKET ops for transport'.
>
>  Per patch changelog:
>   see every patch after '---' line.
>
>  v6 -> v7:
>  General changelog:
>  - virtio transport callback for message length now removed
>    from transport. Length of record is returned by dequeue
>    callback.
>
>  - function which tries to get message length now returns 0
>    when rx queue is empty. Also length of current message in
>    progress is set to 0, when message processed or error
>    happens.
>
>  - patches for virtio feature bit moved after patches with
>    transport ops.
>
>  Per patch changelog:
>   see every patch after '---' line.
>
>  v5 -> v6:
>  General changelog:
>  - virtio transport specific callbacks which send SEQ_BEGIN or
>    SEQ_END now hidden inside virtio transport. Only enqueue,
>    dequeue and record length callbacks are provided by transport.
>
>  - virtio feature bit for SEQPACKET socket support introduced:
>    VIRTIO_VSOCK_F_SEQPACKET.
>
>  - 'msg_cnt' field in 'struct virtio_vsock_seq_hdr' renamed to
>    'msg_id' and used as id.
>
>  Per patch changelog:
>  - 'af_vsock: separate wait data loop':
>     1) Commit message updated.
>     2) 'prepare_to_wait()' moved inside the while loop (thanks to
>       Jorgen Hansen).
>     The patch was marked 'Reviewed-by' after 1), but because of 2)
>     I removed the R-b.
>
>  - 'af_vsock: separate receive data loop': commit message
>     updated.
>     Marked 'Reviewed-by' with that fix.
>
>  - 'af_vsock: implement SEQPACKET receive loop': style fixes.
>
>  - 'af_vsock: rest of SEQPACKET support':
>     1) 'module_put()' added when transport callback check failed.
>     2) Now only 'seqpacket_allow()' callback called to check
>        support of SEQPACKET by transport.
>
>  - 'af_vsock: update comments for stream sockets': commit message
>     updated.
>     Marked 'Reviewed-by' with that fix.
>
>  - 'virtio/vsock: set packet's type in send':
>     1) Commit message updated.
>     2) Parameter 'type' of 'virtio_transport_send_credit_update()'
>        is also removed in this patch instead of in the next one.
>
>  - 'virtio/vsock: dequeue callback for SOCK_SEQPACKET': SEQPACKET
>     related state is wrapped in a dedicated struct.
>
>  - 'virtio/vsock: update trace event for SEQPACKET': format strings
>     are no longer broken by newlines.
>
>  v4 -> v5:
>  - patches reorganized:
>    1) Setting of packet's type in 'virtio_transport_send_pkt_info()'
>       is moved to separate patch.
>    2) Simplifying of 'virtio_transport_send_credit_update()' is
>       moved to separate patch and before main virtio/vsock patches.
>  - style problem fixed
>  - in 'af_vsock: separate receive data loop' extra 'release_sock()'
>    removed
>  - added trace event fields for SEQPACKET
>  - in 'af_vsock: separate wait data loop':
>    1) 'vsock_wait_data()' removed 'goto out;'
>    2) Comment for invalid data amount is changed.
>  - in 'af_vsock: rest of SEQPACKET support', 'new_transport' pointer
>    check is moved after 'try_module_get()'
>  - in 'af_vsock: update comments for stream sockets', 'connect-oriented'
>    replaced with 'connection-oriented'
>  - in 'loopback/vsock: setup SEQPACKET ops for transport',
>    'loopback/vsock' replaced with 'vsock/loopback'
>
>  v3 -> v4:
>  - SEQPACKET specific metadata moved from packet header to payload
>    and called 'virtio_vsock_seq_hdr'
>  - record integrity check:
>    1) A SEQ_END operation was added, which marks the end of a record.
>    2) Both SEQ_BEGIN and SEQ_END carry a counter which is incremented
>       on every marker sent.
>  - af_vsock.c: socket operations for STREAM and SEQPACKET call the
>    same functions instead of having their own "gates" differing only
>    by name: 'vsock_seqpacket/stream_getsockopt()' is now replaced
>    with 'vsock_connectible_getsockopt()'.
>  - af_vsock.c: the 'seqpacket_dequeue' callback returns an error and
>    a flag that the record is ready. There is no need to return the
>    number of copied bytes, because the case when a record is received
>    successfully is checked at the virtio transport layer, when
>    SEQ_END is processed. Also, the user doesn't need the number of
>    copied bytes, because 'recv()' on SEQPACKET may return an error,
>    the length of the user's buffer, or the length of the whole record
>    (both are known in af_vsock.c).
>  - af_vsock.c: both wait loops in af_vsock.c (for data and for space)
>    moved to separate functions, because both are now called from
>    several places.
>  - af_vsock.c: 'vsock_assign_transport()' checks that the
>    'new_transport' pointer is not NULL and returns 'ESOCKTNOSUPPORT'
>    instead of 'ENODEV' if it failed to use the transport.
>  - tools/testing/vsock/vsock_test.c: rename tests
>
>  v2 -> v3:
>  - patches reorganized: split for prepare and implementation patches
>  - local variables are declared in "Reverse Christmas tree" manner
>  - virtio_transport_common.c: valid leXX_to_cpu() for vsock header
>    fields access
>  - af_vsock.c: 'vsock_connectible_*sockopt()' added as shared code
>    between stream and seqpacket sockets.
>  - af_vsock.c: loops in '__vsock_*_recvmsg()' refactored.
>  - af_vsock.c: 'vsock_wait_data()' refactored.
>
>  v1 -> v2:
>  - patches reordered: af_vsock.c related changes now before virtio vsock
>  - patches reorganized: more small patches, where +/- are not mixed
>  - tests for SOCK_SEQPACKET added
>  - all commit messages updated
>  - af_vsock.c: 'vsock_pre_recv_check()' inlined to
>    'vsock_connectible_recvmsg()'
>  - af_vsock.c: 'vsock_assign_transport()' returns ENODEV if transport
>    was not found
>  - virtio_transport_common.c: transport callback for seqpacket dequeue
>  - virtio_transport_common.c: simplified
>    'virtio_transport_recv_connected()'
>  - virtio_transport_common.c: send reset on socket and packet type
>    mismatch.
>
> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>
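As a side note for readers of this archive: the single-bit framing adopted in v8 above (only a VIRTIO_VSOCK_SEQ_EOR flag on the last RW packet of each message, over a reliable, ordered channel) can be modeled with a short sketch. This is illustrative only — the names and packet representation here are hypothetical, not the kernel implementation:

```python
SEQ_EOR = 0x1  # hypothetical flag bit, standing in for VIRTIO_VSOCK_SEQ_EOR

def reassemble(packets):
    """Rebuild message boundaries from an ordered stream of RW packets.

    Each packet is a (payload, flags) pair; payloads are concatenated
    until a packet with the SEQ_EOR bit set, which ends the message.
    """
    message = b""
    for payload, flags in packets:
        message += payload
        if flags & SEQ_EOR:
            yield message
            message = b""

# Two messages, the first one split across two packets:
msgs = list(reassemble([(b"he", 0), (b"llo", SEQ_EOR), (b"world", SEQ_EOR)]))
```

Because the transport never reorders packets of one socket, this is all the receiver needs in order to restore the sender's record boundaries.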

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 11:17 ` [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
@ 2021-06-11 12:25     ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-11 12:25 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Andra Paraschiv, Norbert Slusarek,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

Hi Arseny,

On Fri, Jun 11, 2021 at 02:17:00PM +0300, Arseny Krasnov wrote:
>
>On 11.06.2021 14:07, Arseny Krasnov wrote:
>> 	This patchset implements support of SOCK_SEQPACKET for virtio
>> transport.
>> 	As SOCK_SEQPACKET guarantees to save record boundaries, so to
>> do it, new bit for field 'flags' was added: SEQ_EOR. This bit is
>> set to 1 in last RW packet of message.
>> 	Now as  packets of one socket are not reordered neither on vsock
>> nor on vhost transport layers, such bit allows to restore original
>> message on receiver's side. If user's buffer is smaller than message
>> length, when all out of size data is dropped.
>> 	Maximum length of datagram is limited by 'peer_buf_alloc' value.
>> 	Implementation also supports 'MSG_TRUNC' flags.
>> 	Tests also implemented.
>>
>> 	Thanks to stsp2@yandex.ru for encouragements and initial design
>> recommendations.
>>
>>  Arseny Krasnov (18):
>>   af_vsock: update functions for connectible socket
>>   af_vsock: separate wait data loop
>>   af_vsock: separate receive data loop
>>   af_vsock: implement SEQPACKET receive loop
>>   af_vsock: implement send logic for SEQPACKET
>>   af_vsock: rest of SEQPACKET support
>>   af_vsock: update comments for stream sockets
>>   virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>>   virtio/vsock: simplify credit update function API
>>   virtio/vsock: defines and constants for SEQPACKET
>>   virtio/vsock: dequeue callback for SOCK_SEQPACKET
>>   virtio/vsock: add SEQPACKET receive logic
>>   virtio/vsock: rest of SOCK_SEQPACKET support
>>   virtio/vsock: enable SEQPACKET for transport
>>   vhost/vsock: enable SEQPACKET for transport
>>   vsock/loopback: enable SEQPACKET for transport
>>   vsock_test: add SOCK_SEQPACKET tests
>>   virtio/vsock: update trace event for SEQPACKET
>>
>>  drivers/vhost/vsock.c                              |  56 ++-
>>  include/linux/virtio_vsock.h                       |  10 +
>>  include/net/af_vsock.h                             |   8 +
>>  .../trace/events/vsock_virtio_transport_common.h   |   5 +-
>>  include/uapi/linux/virtio_vsock.h                  |   9 +
>>  net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
>>  net/vmw_vsock/virtio_transport.c                   |  26 ++
>>  net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
>>  net/vmw_vsock/vsock_loopback.c                     |  12 +
>>  tools/testing/vsock/util.c                         |  32 +-
>>  tools/testing/vsock/util.h                         |   3 +
>>  tools/testing/vsock/vsock_test.c                   | 116 ++++++
>>  12 files changed, 730 insertions(+), 190 deletions(-)
>>
>>  v10 -> v11:
>>  General changelog:
>>   - now data is copied to user's buffer only when
>>     whole message is received.
>>   - reader is woken up when EOR packet is received.
>>   - if read syscall was interrupted by signal or
>>     timeout, error is returned(not 0).
>>
>>  Per patch changelog:
>>   see every patch after '---' line.
>So here is new version for review with updates discussed earlier :)

Thanks, I'll review next week, but I suggest again that you split this 
into two series, since patchwork (and the netdev maintainers) are not 
happy with a series of 18 patches.

If you still prefer to keep them together during development, then 
please use the RFC tag.

Also did you take a look at the FAQ for netdev that I linked last time?
I don't see the net-next tag...

Thanks,
Stefano
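
For anyone following the thread who wants to try out the record-boundary and MSG_TRUNC semantics under discussion, AF_UNIX SOCK_SEQPACKET sockets on Linux behave the same way in these respects and need no VM (this is a generic SOCK_SEQPACKET demo, not the vsock code under review):

```python
import socket

# A connected SOCK_SEQPACKET pair; each send() produces one record.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)
a.send(b"first")
a.send(b"second")

# Record boundaries are preserved: each recv() returns one whole record.
rec1 = b.recv(100)
rec2 = b.recv(100)

# A too-small buffer truncates the record: recvmsg() reports MSG_TRUNC
# in the returned message flags and the excess bytes are dropped, as the
# cover letter describes for undersized user buffers.
a.send(b"0123456789")
data, _, flags, _ = b.recvmsg(4)
truncated = bool(flags & socket.MSG_TRUNC)
```

Passing MSG_TRUNC as a recv() flag instead makes the call return the real record length — the standard datagram semantics that the patchset's 'MSG_TRUNC' support follows.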



* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 12:25     ` Stefano Garzarella
  (?)
@ 2021-06-11 14:39     ` Arseny Krasnov
  2021-06-11 14:57         ` Stefano Garzarella
  -1 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 14:39 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Andra Paraschiv, Norbert Slusarek,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 11.06.2021 15:25, Stefano Garzarella wrote:
> Hi Arseny,
>
> On Fri, Jun 11, 2021 at 02:17:00PM +0300, Arseny Krasnov wrote:
>> On 11.06.2021 14:07, Arseny Krasnov wrote:
>>> 	This patchset implements support of SOCK_SEQPACKET for virtio
>>> transport.
>>> 	As SOCK_SEQPACKET guarantees to save record boundaries, so to
>>> do it, new bit for field 'flags' was added: SEQ_EOR. This bit is
>>> set to 1 in last RW packet of message.
>>> 	Now as  packets of one socket are not reordered neither on vsock
>>> nor on vhost transport layers, such bit allows to restore original
>>> message on receiver's side. If user's buffer is smaller than message
>>> length, when all out of size data is dropped.
>>> 	Maximum length of datagram is limited by 'peer_buf_alloc' value.
>>> 	Implementation also supports 'MSG_TRUNC' flags.
>>> 	Tests also implemented.
>>>
>>> 	Thanks to stsp2@yandex.ru for encouragements and initial design
>>> recommendations.
>>>
>>>  Arseny Krasnov (18):
>>>   af_vsock: update functions for connectible socket
>>>   af_vsock: separate wait data loop
>>>   af_vsock: separate receive data loop
>>>   af_vsock: implement SEQPACKET receive loop
>>>   af_vsock: implement send logic for SEQPACKET
>>>   af_vsock: rest of SEQPACKET support
>>>   af_vsock: update comments for stream sockets
>>>   virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>>>   virtio/vsock: simplify credit update function API
>>>   virtio/vsock: defines and constants for SEQPACKET
>>>   virtio/vsock: dequeue callback for SOCK_SEQPACKET
>>>   virtio/vsock: add SEQPACKET receive logic
>>>   virtio/vsock: rest of SOCK_SEQPACKET support
>>>   virtio/vsock: enable SEQPACKET for transport
>>>   vhost/vsock: enable SEQPACKET for transport
>>>   vsock/loopback: enable SEQPACKET for transport
>>>   vsock_test: add SOCK_SEQPACKET tests
>>>   virtio/vsock: update trace event for SEQPACKET
>>>
>>>  drivers/vhost/vsock.c                              |  56 ++-
>>>  include/linux/virtio_vsock.h                       |  10 +
>>>  include/net/af_vsock.h                             |   8 +
>>>  .../trace/events/vsock_virtio_transport_common.h   |   5 +-
>>>  include/uapi/linux/virtio_vsock.h                  |   9 +
>>>  net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
>>>  net/vmw_vsock/virtio_transport.c                   |  26 ++
>>>  net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
>>>  net/vmw_vsock/vsock_loopback.c                     |  12 +
>>>  tools/testing/vsock/util.c                         |  32 +-
>>>  tools/testing/vsock/util.h                         |   3 +
>>>  tools/testing/vsock/vsock_test.c                   | 116 ++++++
>>>  12 files changed, 730 insertions(+), 190 deletions(-)
>>>
>>>  v10 -> v11:
>>>  General changelog:
>>>   - now data is copied to user's buffer only when
>>>     whole message is received.
>>>   - reader is woken up when EOR packet is received.
>>>   - if read syscall was interrupted by signal or
>>>     timeout, error is returned(not 0).
>>>
>>>  Per patch changelog:
>>>   see every patch after '---' line.
>> So here is new version for review with updates discussed earlier :)
> Thanks, I'll review next week, but I suggest you again to split in two 
> series, since patchwork (and netdev maintainers) are not happy with a 
> series of 18 patches.
>
> If you still prefer to keep them together during development, then 
> please use the RFC tag.
>
> Also did you take a look at the FAQ for netdev that I linked last time?
> I don't see the net-next tag...

I didn't use the net-next tag because two patches from the first seven
(which were considered to be sent to netdev) - 0004 and 0006 - were
changed in this patchset (because of the last ideas about queueing the
whole message). So I removed the R-b lines, and now there is no sense
in using the net-next tag for the first patches. When they get their
R-b again, I'll send them to netdev with that tag, and we can continue
discussing the second part of the patches (virtio specific).


Thank You

>
> Thanks,
> Stefano
>
>


* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 14:39     ` Arseny Krasnov
@ 2021-06-11 14:57         ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-11 14:57 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Andra Paraschiv, Norbert Slusarek,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

On Fri, Jun 11, 2021 at 05:39:01PM +0300, Arseny Krasnov wrote:
>
>On 11.06.2021 15:25, Stefano Garzarella wrote:
>> Hi Arseny,
>>
>> On Fri, Jun 11, 2021 at 02:17:00PM +0300, Arseny Krasnov wrote:
>>> On 11.06.2021 14:07, Arseny Krasnov wrote:
>>>> 	This patchset implements support of SOCK_SEQPACKET for virtio
>>>> transport.
>>>> 	As SOCK_SEQPACKET guarantees to save record boundaries, so to
>>>> do it, new bit for field 'flags' was added: SEQ_EOR. This bit is
>>>> set to 1 in last RW packet of message.
>>>> 	Now as  packets of one socket are not reordered neither on vsock
>>>> nor on vhost transport layers, such bit allows to restore original
>>>> message on receiver's side. If user's buffer is smaller than message
>>>> length, when all out of size data is dropped.
>>>> 	Maximum length of datagram is limited by 'peer_buf_alloc' value.
>>>> 	Implementation also supports 'MSG_TRUNC' flags.
>>>> 	Tests also implemented.
>>>>
>>>> 	Thanks to stsp2@yandex.ru for encouragements and initial design
>>>> recommendations.
>>>>
>>>>  Arseny Krasnov (18):
>>>>   af_vsock: update functions for connectible socket
>>>>   af_vsock: separate wait data loop
>>>>   af_vsock: separate receive data loop
>>>>   af_vsock: implement SEQPACKET receive loop
>>>>   af_vsock: implement send logic for SEQPACKET
>>>>   af_vsock: rest of SEQPACKET support
>>>>   af_vsock: update comments for stream sockets
>>>>   virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>>>>   virtio/vsock: simplify credit update function API
>>>>   virtio/vsock: defines and constants for SEQPACKET
>>>>   virtio/vsock: dequeue callback for SOCK_SEQPACKET
>>>>   virtio/vsock: add SEQPACKET receive logic
>>>>   virtio/vsock: rest of SOCK_SEQPACKET support
>>>>   virtio/vsock: enable SEQPACKET for transport
>>>>   vhost/vsock: enable SEQPACKET for transport
>>>>   vsock/loopback: enable SEQPACKET for transport
>>>>   vsock_test: add SOCK_SEQPACKET tests
>>>>   virtio/vsock: update trace event for SEQPACKET
>>>>
>>>>  drivers/vhost/vsock.c                              |  56 ++-
>>>>  include/linux/virtio_vsock.h                       |  10 +
>>>>  include/net/af_vsock.h                             |   8 +
>>>>  .../trace/events/vsock_virtio_transport_common.h   |   5 +-
>>>>  include/uapi/linux/virtio_vsock.h                  |   9 +
>>>>  net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
>>>>  net/vmw_vsock/virtio_transport.c                   |  26 ++
>>>>  net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
>>>>  net/vmw_vsock/vsock_loopback.c                     |  12 +
>>>>  tools/testing/vsock/util.c                         |  32 +-
>>>>  tools/testing/vsock/util.h                         |   3 +
>>>>  tools/testing/vsock/vsock_test.c                   | 116 ++++++
>>>>  12 files changed, 730 insertions(+), 190 deletions(-)
>>>>
>>>>  v10 -> v11:
>>>>  General changelog:
>>>>   - now data is copied to user's buffer only when
>>>>     whole message is received.
>>>>   - reader is woken up when EOR packet is received.
>>>>   - if read syscall was interrupted by signal or
>>>>     timeout, error is returned(not 0).
>>>>
>>>>  Per patch changelog:
>>>>   see every patch after '---' line.
>>> So here is new version for review with updates discussed earlier :)
>> Thanks, I'll review next week, but I suggest you again to split in two
>> series, since patchwork (and netdev maintainers) are not happy with a
>> series of 18 patches.
>>
>> If you still prefer to keep them together during development, then
>> please use the RFC tag.
>>
>> Also did you take a look at the FAQ for netdev that I linked last 
>> time?
>> I don't see the net-next tag...
>
>I didn't use next tag because two patches from first seven(which was
>
>considered to be sent to netdev) - 0004 and 0006
>
>were changed in this patchset(because of last ideas about queueing
>
>whole message). So i removed R-b line and now there is no sense to
>
>use net-next tag for first patches. When it will be R-b - i'll send it 

Okay, in that case better to use RFC tag.

>to
>
>netdev with such tag and we can continue discussing second part
>
>of patches(virtio specific).

Don't worry for now. You can do it for the next round, but I think all 
the patches will go through netdev, and it would be better to split 
them into 2 series, both with the net-next tag.

Thanks,
Stefano



* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 14:57         ` Stefano Garzarella
  (?)
@ 2021-06-11 15:00         ` Arseny Krasnov
  -1 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-11 15:00 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Andra Paraschiv, Norbert Slusarek,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 11.06.2021 17:57, Stefano Garzarella wrote:
> On Fri, Jun 11, 2021 at 05:39:01PM +0300, Arseny Krasnov wrote:
>> On 11.06.2021 15:25, Stefano Garzarella wrote:
>>> Hi Arseny,
>>>
>>> On Fri, Jun 11, 2021 at 02:17:00PM +0300, Arseny Krasnov wrote:
>>>> On 11.06.2021 14:07, Arseny Krasnov wrote:
>>>>> 	This patchset implements support of SOCK_SEQPACKET for virtio
>>>>> transport.
>>>>> 	As SOCK_SEQPACKET guarantees to save record boundaries, so to
>>>>> do it, new bit for field 'flags' was added: SEQ_EOR. This bit is
>>>>> set to 1 in last RW packet of message.
>>>>> 	Now as  packets of one socket are not reordered neither on vsock
>>>>> nor on vhost transport layers, such bit allows to restore original
>>>>> message on receiver's side. If user's buffer is smaller than message
>>>>> length, when all out of size data is dropped.
>>>>> 	Maximum length of datagram is limited by 'peer_buf_alloc' value.
>>>>> 	Implementation also supports 'MSG_TRUNC' flags.
>>>>> 	Tests also implemented.
>>>>>
>>>>> 	Thanks to stsp2@yandex.ru for encouragements and initial design
>>>>> recommendations.
>>>>>
>>>>>  Arseny Krasnov (18):
>>>>>   af_vsock: update functions for connectible socket
>>>>>   af_vsock: separate wait data loop
>>>>>   af_vsock: separate receive data loop
>>>>>   af_vsock: implement SEQPACKET receive loop
>>>>>   af_vsock: implement send logic for SEQPACKET
>>>>>   af_vsock: rest of SEQPACKET support
>>>>>   af_vsock: update comments for stream sockets
>>>>>   virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>>>>>   virtio/vsock: simplify credit update function API
>>>>>   virtio/vsock: defines and constants for SEQPACKET
>>>>>   virtio/vsock: dequeue callback for SOCK_SEQPACKET
>>>>>   virtio/vsock: add SEQPACKET receive logic
>>>>>   virtio/vsock: rest of SOCK_SEQPACKET support
>>>>>   virtio/vsock: enable SEQPACKET for transport
>>>>>   vhost/vsock: enable SEQPACKET for transport
>>>>>   vsock/loopback: enable SEQPACKET for transport
>>>>>   vsock_test: add SOCK_SEQPACKET tests
>>>>>   virtio/vsock: update trace event for SEQPACKET
>>>>>
>>>>>  drivers/vhost/vsock.c                              |  56 ++-
>>>>>  include/linux/virtio_vsock.h                       |  10 +
>>>>>  include/net/af_vsock.h                             |   8 +
>>>>>  .../trace/events/vsock_virtio_transport_common.h   |   5 +-
>>>>>  include/uapi/linux/virtio_vsock.h                  |   9 +
>>>>>  net/vmw_vsock/af_vsock.c                           | 464 ++++++++++++------
>>>>>  net/vmw_vsock/virtio_transport.c                   |  26 ++
>>>>>  net/vmw_vsock/virtio_transport_common.c            | 179 +++++++-
>>>>>  net/vmw_vsock/vsock_loopback.c                     |  12 +
>>>>>  tools/testing/vsock/util.c                         |  32 +-
>>>>>  tools/testing/vsock/util.h                         |   3 +
>>>>>  tools/testing/vsock/vsock_test.c                   | 116 ++++++
>>>>>  12 files changed, 730 insertions(+), 190 deletions(-)
>>>>>
>>>>>  v10 -> v11:
>>>>>  General changelog:
>>>>>   - now data is copied to user's buffer only when
>>>>>     whole message is received.
>>>>>   - reader is woken up when EOR packet is received.
>>>>>   - if read syscall was interrupted by signal or
>>>>>     timeout, error is returned(not 0).
>>>>>
>>>>>  Per patch changelog:
>>>>>   see every patch after '---' line.
>>>> So here is new version for review with updates discussed earlier :)
>>> Thanks, I'll review next week, but I suggest you again to split in two
>>> series, since patchwork (and netdev maintainers) are not happy with a
>>> series of 18 patches.
>>>
>>> If you still prefer to keep them together during development, then
>>> please use the RFC tag.
>>>
>>> Also, did you take a look at the netdev FAQ that I linked last 
>>> time?
>>> I don't see the net-next tag...
>> I didn't use the net-next tag because two patches from the first
>> seven (which were to be sent to netdev) - 0004 and 0006 - were
>> changed in this patchset (because of the latest ideas about queueing
>> the whole message). So I removed the R-b lines, and now there is no
>> point in using the net-next tag for the first patches. When they get
>> R-b, I'll send them
> Okay, in that case it's better to use the RFC tag.
Ack, I'll use the RFC tag for the big patchset on LKML
>
>> to netdev with that tag and we can continue discussing the second
>> part of the patches (virtio specific).
> Don't worry for now. You can do it for the next round, but I think all 
> the patches will go through netdev, and it would be better to split them 
> into 2 series, both with the net-next tag.

Of course, for netdev this patchset will be split into two series


Thank you

>
> Thanks,
> Stefano
>
>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
                   ` (18 preceding siblings ...)
  2021-06-11 11:17 ` [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
@ 2021-06-11 21:00 ` patchwork-bot+netdevbpf
  2021-06-18 13:49     ` Michael S. Tsirkin
  19 siblings, 1 reply; 46+ messages in thread
From: patchwork-bot+netdevbpf @ 2021-06-11 21:00 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: stefanha, sgarzare, mst, jasowang, davem, kuba, andraprs,
	nslusarek, colin.king, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

Hello:

This series was applied to netdev/net-next.git (refs/heads/master):

On Fri, 11 Jun 2021 14:07:40 +0300 you wrote:
> This patchset implements SOCK_SEQPACKET support for the virtio
> transport.
> 	Since SOCK_SEQPACKET must preserve record boundaries, a new bit
> was added to the 'flags' field: SEQ_EOR. This bit is set to 1 in the
> last RW packet of a message.
> 	Since packets of one socket are reordered neither at the vsock
> nor at the vhost transport layer, this bit lets the receiver restore
> the original message. If the user's buffer is smaller than the message
> length, all data that does not fit is dropped.
> 	The maximum length of a datagram is limited by the
> 'peer_buf_alloc' value.
> 	The implementation also supports the 'MSG_TRUNC' flag.
> 	Tests are also implemented.
> 
> [...]

Here is the summary with links:
  - [v11,01/18] af_vsock: update functions for connectible socket
    https://git.kernel.org/netdev/net-next/c/a9e29e5511b9
  - [v11,02/18] af_vsock: separate wait data loop
    https://git.kernel.org/netdev/net-next/c/b3f7fd54881b
  - [v11,03/18] af_vsock: separate receive data loop
    https://git.kernel.org/netdev/net-next/c/19c1b90e1979
  - [v11,04/18] af_vsock: implement SEQPACKET receive loop
    https://git.kernel.org/netdev/net-next/c/9942c192b256
  - [v11,05/18] af_vsock: implement send logic for SEQPACKET
    https://git.kernel.org/netdev/net-next/c/fbe70c480796
  - [v11,06/18] af_vsock: rest of SEQPACKET support
    https://git.kernel.org/netdev/net-next/c/0798e78b102b
  - [v11,07/18] af_vsock: update comments for stream sockets
    https://git.kernel.org/netdev/net-next/c/8cb48554ad82
  - [v11,08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
    https://git.kernel.org/netdev/net-next/c/b93f8877c1f2
  - [v11,09/18] virtio/vsock: simplify credit update function API
    https://git.kernel.org/netdev/net-next/c/c10844c59799
  - [v11,10/18] virtio/vsock: defines and constants for SEQPACKET
    https://git.kernel.org/netdev/net-next/c/f07b2a5b04d4
  - [v11,11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
    https://git.kernel.org/netdev/net-next/c/44931195a541
  - [v11,12/18] virtio/vsock: add SEQPACKET receive logic
    https://git.kernel.org/netdev/net-next/c/e4b1ef152f53
  - [v11,13/18] virtio/vsock: rest of SOCK_SEQPACKET support
    https://git.kernel.org/netdev/net-next/c/9ac841f5e9f2
  - [v11,14/18] virtio/vsock: enable SEQPACKET for transport
    https://git.kernel.org/netdev/net-next/c/53efbba12cc7
  - [v11,15/18] vhost/vsock: support SEQPACKET for transport
    https://git.kernel.org/netdev/net-next/c/ced7b713711f
  - [v11,16/18] vsock/loopback: enable SEQPACKET for transport
    https://git.kernel.org/netdev/net-next/c/6e90a57795aa
  - [v11,17/18] vsock_test: add SOCK_SEQPACKET tests
    https://git.kernel.org/netdev/net-next/c/41b792d7a86d
  - [v11,18/18] virtio/vsock: update trace event for SEQPACKET
    https://git.kernel.org/netdev/net-next/c/184039eefeae

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-11 11:12 ` [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET Arseny Krasnov
@ 2021-06-18 13:44     ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-18 13:44 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

Hi Arseny,
the series looks great, I have just a question below about 
seqpacket_dequeue.

I also sent a couple of simple fixes; it would be great if you could 
review them: 
https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/


On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>The callback fetches RW packets from the socket's rx queue until the
>whole record is copied (if the user's buffer is full, the user is not
>woken up). This is done to avoid stalling the sender: if we wake the
>user up and it leaves the syscall, nobody will send a credit update for
>the rest of the record, and the sender will wait for the next read
>syscall on the receiver's side. So if the user buffer is full, we just
>send a credit update and drop the data.
>
>Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>---
> v10 -> v11:
> 1) 'msg_count' field added to count current number of EORs.
> 2) 'msg_ready' argument removed from callback.
> 3) If 'memcpy_to_msg()' fails during the copy loop, there will be
>    no further attempts to copy data; the rest of the record is freed.
>
> include/linux/virtio_vsock.h            |  5 ++
> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
> 2 files changed, 89 insertions(+)
>
>diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>index dc636b727179..1d9a302cb91d 100644
>--- a/include/linux/virtio_vsock.h
>+++ b/include/linux/virtio_vsock.h
>@@ -36,6 +36,7 @@ struct virtio_vsock_sock {
> 	u32 rx_bytes;
> 	u32 buf_alloc;
> 	struct list_head rx_queue;
>+	u32 msg_count;
> };
>
> struct virtio_vsock_pkt {
>@@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
> 			       struct msghdr *msg,
> 			       size_t len, int flags);
>
>+ssize_t
>+virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>+				   struct msghdr *msg,
>+				   int flags);
> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index ad0d34d41444..1e1df19ec164 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
> 	return err;
> }
>
>+static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>+						 struct msghdr *msg,
>+						 int flags)
>+{
>+	struct virtio_vsock_sock *vvs = vsk->trans;
>+	struct virtio_vsock_pkt *pkt;
>+	int dequeued_len = 0;
>+	size_t user_buf_len = msg_data_left(msg);
>+	bool copy_failed = false;
>+	bool msg_ready = false;
>+
>+	spin_lock_bh(&vvs->rx_lock);
>+
>+	if (vvs->msg_count == 0) {
>+		spin_unlock_bh(&vvs->rx_lock);
>+		return 0;
>+	}
>+
>+	while (!msg_ready) {
>+		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>+
>+		if (!copy_failed) {
>+			size_t pkt_len;
>+			size_t bytes_to_copy;
>+
>+			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>+			bytes_to_copy = min(user_buf_len, pkt_len);
>+
>+			if (bytes_to_copy) {
>+				int err;
>+
>+				/* sk_lock is held by caller so no one else can dequeue.
>+				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>+				 */
>+				spin_unlock_bh(&vvs->rx_lock);
>+
>+				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>+				if (err) {
>+					/* Copy of message failed, set flag to skip
>+					 * copy path for rest of fragments. Rest of
>+					 * fragments will be freed without copy.
>+					 */
>+					copy_failed = true;
>+					dequeued_len = err;

If we fail to copy the message, we will discard the entire packet.
Is that acceptable from the user's point of view, or should we leave 
the packet in the queue so the user can retry, maybe with a different 
buffer?

Then we would remove the packets only once we have successfully copied 
all the fragments.

I'm not sure it makes sense; maybe it's better to also check other 
implementations :-)

Thanks,
Stefano

>+				} else {
>+					user_buf_len -= bytes_to_copy;
>+				}
>+
>+				spin_lock_bh(&vvs->rx_lock);
>+			}
>+
>+			if (dequeued_len >= 0)
>+				dequeued_len += pkt_len;
>+		}
>+
>+		if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR) {
>+			msg_ready = true;
>+			vvs->msg_count--;
>+		}
>+
>+		virtio_transport_dec_rx_pkt(vvs, pkt);
>+		list_del(&pkt->list);
>+		virtio_transport_free_pkt(pkt);
>+	}
>+
>+	spin_unlock_bh(&vvs->rx_lock);
>+
>+	virtio_transport_send_credit_update(vsk);
>+
>+	return dequeued_len;
>+}
>+
> ssize_t
> virtio_transport_stream_dequeue(struct vsock_sock *vsk,
> 				struct msghdr *msg,
>@@ -405,6 +477,18 @@ virtio_transport_stream_dequeue(struct vsock_sock *vsk,
> }
> EXPORT_SYMBOL_GPL(virtio_transport_stream_dequeue);
>
>+ssize_t
>+virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>+				   struct msghdr *msg,
>+				   int flags)
>+{
>+	if (flags & MSG_PEEK)
>+		return -EOPNOTSUPP;
>+
>+	return virtio_transport_seqpacket_do_dequeue(vsk, msg, flags);
>+}
>+EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
>+
> int
> virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
> 			       struct msghdr *msg,
>-- 
>2.25.1
>




* Re: [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support
  2021-06-11 21:00 ` patchwork-bot+netdevbpf
@ 2021-06-18 13:49     ` Michael S. Tsirkin
  0 siblings, 0 replies; 46+ messages in thread
From: Michael S. Tsirkin @ 2021-06-18 13:49 UTC (permalink / raw)
  To: patchwork-bot+netdevbpf
  Cc: Arseny Krasnov, stefanha, sgarzare, jasowang, davem, kuba,
	andraprs, nslusarek, colin.king, kvm, virtualization, netdev,
	linux-kernel, oxffffaa

On Fri, Jun 11, 2021 at 09:00:13PM +0000, patchwork-bot+netdevbpf@kernel.org wrote:
> Hello:
> 
> This series was applied to netdev/net-next.git (refs/heads/master):
> 
> On Fri, 11 Jun 2021 14:07:40 +0300 you wrote:
> > This patchset implements SOCK_SEQPACKET support for the virtio
> > transport.
> > 	Since SOCK_SEQPACKET must preserve record boundaries, a new bit
> > was added to the 'flags' field: SEQ_EOR. This bit is set to 1 in the
> > last RW packet of a message.
> > 	Since packets of one socket are reordered neither at the vsock
> > nor at the vhost transport layer, this bit lets the receiver restore
> > the original message. If the user's buffer is smaller than the message
> > length, all data that does not fit is dropped.
> > 	The maximum length of a datagram is limited by the
> > 'peer_buf_alloc' value.
> > 	The implementation also supports the 'MSG_TRUNC' flag.
> > 	Tests are also implemented.
> > 
> > [...]
> 
> Here is the summary with links:
>   - [v11,01/18] af_vsock: update functions for connectible socket
>     https://git.kernel.org/netdev/net-next/c/a9e29e5511b9
>   - [v11,02/18] af_vsock: separate wait data loop
>     https://git.kernel.org/netdev/net-next/c/b3f7fd54881b
>   - [v11,03/18] af_vsock: separate receive data loop
>     https://git.kernel.org/netdev/net-next/c/19c1b90e1979
>   - [v11,04/18] af_vsock: implement SEQPACKET receive loop
>     https://git.kernel.org/netdev/net-next/c/9942c192b256
>   - [v11,05/18] af_vsock: implement send logic for SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/fbe70c480796
>   - [v11,06/18] af_vsock: rest of SEQPACKET support
>     https://git.kernel.org/netdev/net-next/c/0798e78b102b
>   - [v11,07/18] af_vsock: update comments for stream sockets
>     https://git.kernel.org/netdev/net-next/c/8cb48554ad82
>   - [v11,08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info()
>     https://git.kernel.org/netdev/net-next/c/b93f8877c1f2
>   - [v11,09/18] virtio/vsock: simplify credit update function API
>     https://git.kernel.org/netdev/net-next/c/c10844c59799
>   - [v11,10/18] virtio/vsock: defines and constants for SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/f07b2a5b04d4
>   - [v11,11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/44931195a541
>   - [v11,12/18] virtio/vsock: add SEQPACKET receive logic
>     https://git.kernel.org/netdev/net-next/c/e4b1ef152f53
>   - [v11,13/18] virtio/vsock: rest of SOCK_SEQPACKET support
>     https://git.kernel.org/netdev/net-next/c/9ac841f5e9f2
>   - [v11,14/18] virtio/vsock: enable SEQPACKET for transport
>     https://git.kernel.org/netdev/net-next/c/53efbba12cc7
>   - [v11,15/18] vhost/vsock: support SEQPACKET for transport
>     https://git.kernel.org/netdev/net-next/c/ced7b713711f
>   - [v11,16/18] vsock/loopback: enable SEQPACKET for transport
>     https://git.kernel.org/netdev/net-next/c/6e90a57795aa
>   - [v11,17/18] vsock_test: add SOCK_SEQPACKET tests
>     https://git.kernel.org/netdev/net-next/c/41b792d7a86d
>   - [v11,18/18] virtio/vsock: update trace event for SEQPACKET
>     https://git.kernel.org/netdev/net-next/c/184039eefeae

Hmm, so the virtio part was merged before the spec was ready.
What's the plan now?


> You are awesome, thank you!
> --
> Deet-doot-dot, I am a bot.
> https://korg.docs.kernel.org/patchwork/pwbot.html
> 




* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 13:44     ` Stefano Garzarella
@ 2021-06-18 13:51       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 46+ messages in thread
From: Michael S. Tsirkin @ 2021-06-18 13:51 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Arseny Krasnov, Stefan Hajnoczi, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

On Fri, Jun 18, 2021 at 03:44:23PM +0200, Stefano Garzarella wrote:
> Hi Arseny,
> the series looks great, I have just a question below about
> seqpacket_dequeue.
> 
> I also sent a couple of simple fixes; it would be great if you could
> review them:
> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/

So given this was picked into net-next, what's the plan? Just make the
spec follow the code? We can wait and see; if there are issues with the
spec, just remember to mask the feature before release.

-- 
MST




* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 13:51       ` Michael S. Tsirkin
@ 2021-06-18 14:44         ` Stefano Garzarella
  -1 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-18 14:44 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Arseny Krasnov, Stefan Hajnoczi, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

On Fri, Jun 18, 2021 at 09:51:44AM -0400, Michael S. Tsirkin wrote:
>On Fri, Jun 18, 2021 at 03:44:23PM +0200, Stefano Garzarella wrote:
>> Hi Arseny,
>> the series looks great, I have just a question below about
>> seqpacket_dequeue.
>>
>> I also sent a couple a simple fixes, it would be great if you can review
>> them:
>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>
>So given this was picked into net next, what's the plan? Just make spec
>follow code? We can wait and see, if there are issues with the spec just
>remember to mask the feature before release.

Yep, the spec patches were already posted, but not merged yet: 
https://lists.oasis-open.org/archives/virtio-comment/202105/msg00017.html

The changes are quite small and they are aligned with the current 
implementation.

Anyway, I fully agree with you about waiting and masking it before the 
v5.14 release if there are any issues.

Thanks,
Stefano




* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 13:44     ` Stefano Garzarella
  (?)
  (?)
@ 2021-06-18 15:04     ` Arseny Krasnov
  2021-06-18 15:55         ` Stefano Garzarella
  -1 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-18 15:04 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 18.06.2021 16:44, Stefano Garzarella wrote:
> Hi Arseny,
> the series looks great, I have just a question below about 
> seqpacket_dequeue.
>
> I also sent a couple of simple fixes; it would be great if you could 
> review them: 
> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>
>
> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>> The callback fetches RW packets from the socket's rx queue until the
>> whole record is copied (if the user's buffer is full, the user is not
>> woken up). This is done to avoid stalling the sender: if we wake the
>> user up and it leaves the syscall, nobody will send a credit update
>> for the rest of the record, and the sender will wait for the next read
>> syscall on the receiver's side. So if the user buffer is full, we just
>> send a credit update and drop the data.
>>
>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>> ---
>> v10 -> v11:
>> 1) 'msg_count' field added to count current number of EORs.
>> 2) 'msg_ready' argument removed from callback.
>> 3) If 'memcpy_to_msg()' fails during the copy loop, there will be
>>    no further attempts to copy data; the rest of the record is freed.
>>
>> include/linux/virtio_vsock.h            |  5 ++
>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>> 2 files changed, 89 insertions(+)
>>
>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>> index dc636b727179..1d9a302cb91d 100644
>> --- a/include/linux/virtio_vsock.h
>> +++ b/include/linux/virtio_vsock.h
>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>> 	u32 rx_bytes;
>> 	u32 buf_alloc;
>> 	struct list_head rx_queue;
>> +	u32 msg_count;
>> };
>>
>> struct virtio_vsock_pkt {
>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>> 			       struct msghdr *msg,
>> 			       size_t len, int flags);
>>
>> +ssize_t
>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>> +				   struct msghdr *msg,
>> +				   int flags);
>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>
>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>> index ad0d34d41444..1e1df19ec164 100644
>> --- a/net/vmw_vsock/virtio_transport_common.c
>> +++ b/net/vmw_vsock/virtio_transport_common.c
>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>> 	return err;
>> }
>>
>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>> +						 struct msghdr *msg,
>> +						 int flags)
>> +{
>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>> +	struct virtio_vsock_pkt *pkt;
>> +	int dequeued_len = 0;
>> +	size_t user_buf_len = msg_data_left(msg);
>> +	bool copy_failed = false;
>> +	bool msg_ready = false;
>> +
>> +	spin_lock_bh(&vvs->rx_lock);
>> +
>> +	if (vvs->msg_count == 0) {
>> +		spin_unlock_bh(&vvs->rx_lock);
>> +		return 0;
>> +	}
>> +
>> +	while (!msg_ready) {
>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>> +
>> +		if (!copy_failed) {
>> +			size_t pkt_len;
>> +			size_t bytes_to_copy;
>> +
>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>> +
>> +			if (bytes_to_copy) {
>> +				int err;
>> +
>> +				/* sk_lock is held by caller so no one else can dequeue.
>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>> +				 */
>> +				spin_unlock_bh(&vvs->rx_lock);
>> +
>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>> +				if (err) {
>> +					/* Copy of message failed, set flag to skip
>> +					 * copy path for rest of fragments. Rest of
>> +					 * fragments will be freed without copy.
>> +					 */
>> +					copy_failed = true;
>> +					dequeued_len = err;
> If we fail to copy the message we will discard the entire packet.
> Is it acceptable for the user point of view, or we should leave the 
> packet in the queue and the user can retry, maybe with a different 
> buffer?
>
> Then we can remove the packets only when we successfully copied all the 
> fragments.
>
> I'm not sure make sense, maybe better to check also other 
> implementations :-)
>
> Thanks,
> Stefano

Understood, I'll check it on the weekend; anyway, I think it is not
critical for the implementation.

I have another question: maybe it is worth researching an approach
where packets are not queued until the whole message is received, but
are copied to the user's buffer right away, thus freeing memory (like
the previous implementation, of course with a solution to the problem
where part of a message is still in the queue while the reader was
woken by a timeout or signal).

I think this is better because, in the current version, the sender may
set 'peer_buf_alloc' to, for example, 1MB, so at the receiver we get
1MB of 'kmalloc()' memory allocated, even though we have the user's
buffer to copy the data into or to drop it (if the user's buffer is
full). This approach won't change the spec (e.g. no message id or
SEQ_BEGIN would be added).

What do you think?

>
>> +				} else {
>> +					user_buf_len -= bytes_to_copy;
>> +				}
>> +
>> +				spin_lock_bh(&vvs->rx_lock);
>> +			}
>> +
>> +			if (dequeued_len >= 0)
>> +				dequeued_len += pkt_len;
>> +		}
>> +
>> +		if (le32_to_cpu(pkt->hdr.flags) & VIRTIO_VSOCK_SEQ_EOR) {
>> +			msg_ready = true;
>> +			vvs->msg_count--;
>> +		}
>> +
>> +		virtio_transport_dec_rx_pkt(vvs, pkt);
>> +		list_del(&pkt->list);
>> +		virtio_transport_free_pkt(pkt);
>> +	}
>> +
>> +	spin_unlock_bh(&vvs->rx_lock);
>> +
>> +	virtio_transport_send_credit_update(vsk);
>> +
>> +	return dequeued_len;
>> +}
>> +
>> ssize_t
>> virtio_transport_stream_dequeue(struct vsock_sock *vsk,
>> 				struct msghdr *msg,
>> @@ -405,6 +477,18 @@ virtio_transport_stream_dequeue(struct vsock_sock *vsk,
>> }
>> EXPORT_SYMBOL_GPL(virtio_transport_stream_dequeue);
>>
>> +ssize_t
>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>> +				   struct msghdr *msg,
>> +				   int flags)
>> +{
>> +	if (flags & MSG_PEEK)
>> +		return -EOPNOTSUPP;
>> +
>> +	return virtio_transport_seqpacket_do_dequeue(vsk, msg, flags);
>> +}
>> +EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_dequeue);
>> +
>> int
>> virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>> 			       struct msghdr *msg,
>> -- 
>> 2.25.1
>>
>

^ permalink raw reply	[flat|nested] 46+ messages in thread
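The dequeue loop under discussion can be modeled in plain userspace C.
This is an illustrative sketch, not the kernel code: the packet list,
header accessors, and rx_lock handling are simplified away, and the
names `fake_pkt` and `seqpacket_dequeue_model` are invented for the
example. It shows the two behaviors debated in the thread: the full
record length is returned (MSG_TRUNC-style), and data that does not fit
the user buffer is dropped rather than stalling the sender.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for a queued RW packet: 'len' mirrors
 * pkt->hdr.len and 'eor' mirrors the VIRTIO_VSOCK_SEQ_EOR flag. */
struct fake_pkt {
	const char *buf;
	size_t len;
	int eor;
};

/* Model of the seqpacket dequeue loop: walk the packets of one
 * record, copy what fits into the user buffer, drop the rest, and
 * return the full record length so the caller can detect truncation. */
static int seqpacket_dequeue_model(const struct fake_pkt *pkts, size_t npkts,
				   char *user_buf, size_t user_buf_len)
{
	int dequeued_len = 0;	/* full record length, as reported to user */
	size_t copied = 0;
	size_t i;

	for (i = 0; i < npkts; i++) {
		size_t bytes_to_copy = user_buf_len < pkts[i].len ?
				       user_buf_len : pkts[i].len;

		if (bytes_to_copy) {
			memcpy(user_buf + copied, pkts[i].buf, bytes_to_copy);
			copied += bytes_to_copy;
			user_buf_len -= bytes_to_copy;
		}
		/* Out-of-space bytes are simply dropped here. */
		dequeued_len += (int)pkts[i].len;

		if (pkts[i].eor)	/* record boundary reached */
			break;
	}
	return dequeued_len;
}
```

With an 8-byte user buffer and an 11-byte record split across two
packets, the model copies the first 8 bytes, drops the tail, and still
returns 11, mirroring the "send credit update and drop data" behavior
described in the commit message.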

* Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 15:04     ` Arseny Krasnov
@ 2021-06-18 15:55         ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-18 15:55 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 16:44, Stefano Garzarella wrote:
>> Hi Arseny,
>> the series looks great, I have just a question below about
>> seqpacket_dequeue.
>>
>> I also sent a couple a simple fixes, it would be great if you can review
>> them:
>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>
>>
>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>> Callback fetches RW packets from rx queue of socket until whole record
>>> is copied(if user's buffer is full, user is not woken up). This is done
>>> to not stall sender, because if we wake up user and it leaves syscall,
>>> nobody will send credit update for rest of record, and sender will wait
>>> for next enter of read syscall at receiver's side. So if user buffer is
>>> full, we just send credit update and drop data.
>>>
>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>> ---
>>> v10 -> v11:
>>> 1) 'msg_count' field added to count current number of EORs.
>>> 2) 'msg_ready' argument removed from callback.
>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>    no next attempts to copy data, rest of record will be freed.
>>>
>>> include/linux/virtio_vsock.h            |  5 ++
>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>> 2 files changed, 89 insertions(+)
>>>
>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>> index dc636b727179..1d9a302cb91d 100644
>>> --- a/include/linux/virtio_vsock.h
>>> +++ b/include/linux/virtio_vsock.h
>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>> 	u32 rx_bytes;
>>> 	u32 buf_alloc;
>>> 	struct list_head rx_queue;
>>> +	u32 msg_count;
>>> };
>>>
>>> struct virtio_vsock_pkt {
>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>> 			       struct msghdr *msg,
>>> 			       size_t len, int flags);
>>>
>>> +ssize_t
>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>> +				   struct msghdr *msg,
>>> +				   int flags);
>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>
>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>> index ad0d34d41444..1e1df19ec164 100644
>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>> 	return err;
>>> }
>>>
>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>> +						 struct msghdr *msg,
>>> +						 int flags)
>>> +{
>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>> +	struct virtio_vsock_pkt *pkt;
>>> +	int dequeued_len = 0;
>>> +	size_t user_buf_len = msg_data_left(msg);
>>> +	bool copy_failed = false;
>>> +	bool msg_ready = false;
>>> +
>>> +	spin_lock_bh(&vvs->rx_lock);
>>> +
>>> +	if (vvs->msg_count == 0) {
>>> +		spin_unlock_bh(&vvs->rx_lock);
>>> +		return 0;
>>> +	}
>>> +
>>> +	while (!msg_ready) {
>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>> +
>>> +		if (!copy_failed) {
>>> +			size_t pkt_len;
>>> +			size_t bytes_to_copy;
>>> +
>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>> +
>>> +			if (bytes_to_copy) {
>>> +				int err;
>>> +
>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>> +				 */
>>> +				spin_unlock_bh(&vvs->rx_lock);
>>> +
>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>> +				if (err) {
>>> +					/* Copy of message failed, set flag to skip
>>> +					 * copy path for rest of fragments. Rest of
>>> +					 * fragments will be freed without copy.
>>> +					 */
>>> +					copy_failed = true;
>>> +					dequeued_len = err;
>> If we fail to copy the message we will discard the entire packet.
>> Is it acceptable for the user point of view, or we should leave the
>> packet in the queue and the user can retry, maybe with a different
>> buffer?
>>
>> Then we can remove the packets only when we successfully copied all the
>> fragments.
>>
>> I'm not sure make sense, maybe better to check also other
>> implementations :-)
>>
>> Thanks,
>> Stefano
>
>Understand, i'll check it on weekend, anyway I think it is
>not critical for implementation.

Yep, I agree.

>
>
>I have another question: may be it is useful to research for
>approach where packets are not queued until whole message
>is received, but copied to user's buffer thus freeing memory.
>(like previous implementation, of course with solution of problem
>where part of message still in queue, while reader was woken
>by timeout or signal).
>
>I think it is better, because  in current version, sender may set
>'peer_alloc_buf' to  for example 1MB, so at receiver we get
>1MB of 'kmalloc()' memory allocated, while having user's buffer
>to copy data there or drop it(if user's buffer is full). This way
>won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>
>What do You think?

Yep, I see your point and it would be great, but I think the main issue 
to fix is how to handle a signal while we are waiting for the other 
fragments, since the other peer can take an unspecified time to send them.

Note that 'peer_buf_alloc' in the sender is the value received from the 
receiver, so if the receiver doesn't want to allocate 1MB, it can 
advertise a smaller buffer size.

Thanks,
Stefano


^ permalink raw reply	[flat|nested] 46+ messages in thread
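The 'peer_buf_alloc' point refers to the virtio-vsock credit mechanism:
each peer advertises its receive buffer size and how many bytes it has
consumed, and the sender may only have the difference in flight. A
minimal model (field names follow the virtio spec, but the struct and
helper here are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of virtio-vsock credit accounting. The receiver
 * advertises buf_alloc and fwd_cnt in every packet it sends; the
 * sender may have at most buf_alloc - (tx_cnt - fwd_cnt) bytes
 * outstanding. */
struct credit {
	uint32_t peer_buf_alloc; /* rx buffer size advertised by peer */
	uint32_t peer_fwd_cnt;   /* bytes the peer has freed from it  */
	uint32_t tx_cnt;         /* total bytes we have sent          */
};

/* Free space currently available at the peer. */
static uint32_t credit_available(const struct credit *c)
{
	return c->peer_buf_alloc - (c->tx_cnt - c->peer_fwd_cnt);
}
```

This is why a receiver that does not want 1MB queued in the kernel can
simply advertise a smaller 'buf_alloc': the sender's available credit
shrinks accordingly and it must wait for credit updates.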


* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 15:55         ` Stefano Garzarella
  (?)
@ 2021-06-18 16:08         ` Arseny Krasnov
  2021-06-18 16:25             ` Stefano Garzarella
  -1 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-18 16:08 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 18.06.2021 18:55, Stefano Garzarella wrote:
> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>> Hi Arseny,
>>> the series looks great, I have just a question below about
>>> seqpacket_dequeue.
>>>
>>> I also sent a couple a simple fixes, it would be great if you can review
>>> them:
>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>
>>>
>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>> nobody will send credit update for rest of record, and sender will wait
>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>> full, we just send credit update and drop data.
>>>>
>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>> ---
>>>> v10 -> v11:
>>>> 1) 'msg_count' field added to count current number of EORs.
>>>> 2) 'msg_ready' argument removed from callback.
>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>    no next attempts to copy data, rest of record will be freed.
>>>>
>>>> include/linux/virtio_vsock.h            |  5 ++
>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>> 2 files changed, 89 insertions(+)
>>>>
>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>> index dc636b727179..1d9a302cb91d 100644
>>>> --- a/include/linux/virtio_vsock.h
>>>> +++ b/include/linux/virtio_vsock.h
>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>> 	u32 rx_bytes;
>>>> 	u32 buf_alloc;
>>>> 	struct list_head rx_queue;
>>>> +	u32 msg_count;
>>>> };
>>>>
>>>> struct virtio_vsock_pkt {
>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>> 			       struct msghdr *msg,
>>>> 			       size_t len, int flags);
>>>>
>>>> +ssize_t
>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>> +				   struct msghdr *msg,
>>>> +				   int flags);
>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>
>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>> index ad0d34d41444..1e1df19ec164 100644
>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>> 	return err;
>>>> }
>>>>
>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>> +						 struct msghdr *msg,
>>>> +						 int flags)
>>>> +{
>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>> +	struct virtio_vsock_pkt *pkt;
>>>> +	int dequeued_len = 0;
>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>> +	bool copy_failed = false;
>>>> +	bool msg_ready = false;
>>>> +
>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>> +
>>>> +	if (vvs->msg_count == 0) {
>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>> +		return 0;
>>>> +	}
>>>> +
>>>> +	while (!msg_ready) {
>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>> +
>>>> +		if (!copy_failed) {
>>>> +			size_t pkt_len;
>>>> +			size_t bytes_to_copy;
>>>> +
>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>> +
>>>> +			if (bytes_to_copy) {
>>>> +				int err;
>>>> +
>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>> +				 */
>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>> +
>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>> +				if (err) {
>>>> +					/* Copy of message failed, set flag to skip
>>>> +					 * copy path for rest of fragments. Rest of
>>>> +					 * fragments will be freed without copy.
>>>> +					 */
>>>> +					copy_failed = true;
>>>> +					dequeued_len = err;
>>> If we fail to copy the message we will discard the entire packet.
>>> Is it acceptable for the user point of view, or we should leave the
>>> packet in the queue and the user can retry, maybe with a different
>>> buffer?
>>>
>>> Then we can remove the packets only when we successfully copied all the
>>> fragments.
>>>
>>> I'm not sure make sense, maybe better to check also other
>>> implementations :-)
>>>
>>> Thanks,
>>> Stefano
>> Understand, i'll check it on weekend, anyway I think it is
>> not critical for implementation.
> Yep, I agree.
>
>>
>> I have another question: may be it is useful to research for
>> approach where packets are not queued until whole message
>> is received, but copied to user's buffer thus freeing memory.
>> (like previous implementation, of course with solution of problem
>> where part of message still in queue, while reader was woken
>> by timeout or signal).
>>
>> I think it is better, because  in current version, sender may set
>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>> to copy data there or drop it(if user's buffer is full). This way
>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>
>> What do You think?
> Yep, I see your point and it would be great, but I think the main issues 
> to fix is how to handle a signal while we are waiting other fragments 
> since the other peer can take unspecified time to send them.

What about a transport callback, something like 'seqpacket_drain()' or
'seqpacket_drop_curr()': when we get a signal or timeout, we notify the
transport to drop the current message. In the virtio case this would set
a special flag in the transport, so on the next dequeue that flag is
checked and, if it is set, we drop all packets until an EOR is found.
Then we can copy the untouched new record.

> Note that the 'peer_alloc_buf' in the sender, is the value get from the 
> receiver, so if the receiver doesn't want to allocate 1MB, can advertise 
> a small buffer size.
>
> Thanks,
> Stefano
>
>

^ permalink raw reply	[flat|nested] 46+ messages in thread
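The 'seqpacket_drop_curr()' idea proposed above could be sketched as
follows. This is a hypothetical helper, not part of the posted series:
it models the queue as an array and returns where the next, untouched
record begins, which is what the real callback would have to compute
when the drop flag is set.

```c
#include <assert.h>
#include <stddef.h>

/* One queued packet; 'eor' mirrors the VIRTIO_VSOCK_SEQ_EOR flag. */
struct q_pkt {
	int eor;
};

/* Hypothetical drop path for a 'seqpacket_drop_curr()' callback:
 * skip the packets of the interrupted record up to and including the
 * first EOR, and return the index of the first packet of the next
 * record. If no EOR is queued yet, the whole queue is consumed. */
static size_t drop_until_eor(const struct q_pkt *q, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (q[i].eor)
			return i + 1;	/* next record starts here */
	}
	return n;			/* record still incomplete */
}
```

As Stefano notes in the reply below this point in the thread, the cost
of this scheme is that the interrupted record is lost entirely rather
than retried.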

* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 16:08         ` [MASSMAIL KLMS] " Arseny Krasnov
@ 2021-06-18 16:25             ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-18 16:25 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 18:55, Stefano Garzarella wrote:
>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>> Hi Arseny,
>>>> the series looks great, I have just a question below about
>>>> seqpacket_dequeue.
>>>>
>>>> I also sent a couple a simple fixes, it would be great if you can review
>>>> them:
>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>
>>>>
>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>>> nobody will send credit update for rest of record, and sender will wait
>>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>>> full, we just send credit update and drop data.
>>>>>
>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>> ---
>>>>> v10 -> v11:
>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>> 2) 'msg_ready' argument removed from callback.
>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>
>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>> 2 files changed, 89 insertions(+)
>>>>>
>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>> --- a/include/linux/virtio_vsock.h
>>>>> +++ b/include/linux/virtio_vsock.h
>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>> 	u32 rx_bytes;
>>>>> 	u32 buf_alloc;
>>>>> 	struct list_head rx_queue;
>>>>> +	u32 msg_count;
>>>>> };
>>>>>
>>>>> struct virtio_vsock_pkt {
>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>> 			       struct msghdr *msg,
>>>>> 			       size_t len, int flags);
>>>>>
>>>>> +ssize_t
>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>> +				   struct msghdr *msg,
>>>>> +				   int flags);
>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>
>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>> 	return err;
>>>>> }
>>>>>
>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>> +						 struct msghdr *msg,
>>>>> +						 int flags)
>>>>> +{
>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>> +	int dequeued_len = 0;
>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>> +	bool copy_failed = false;
>>>>> +	bool msg_ready = false;
>>>>> +
>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>> +
>>>>> +	if (vvs->msg_count == 0) {
>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>> +		return 0;
>>>>> +	}
>>>>> +
>>>>> +	while (!msg_ready) {
>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>> +
>>>>> +		if (!copy_failed) {
>>>>> +			size_t pkt_len;
>>>>> +			size_t bytes_to_copy;
>>>>> +
>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>> +
>>>>> +			if (bytes_to_copy) {
>>>>> +				int err;
>>>>> +
>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>> +				 */
>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>> +
>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>> +				if (err) {
>>>>> +					/* Copy of message failed, set flag to skip
>>>>> +					 * copy path for rest of fragments. Rest of
>>>>> +					 * fragments will be freed without copy.
>>>>> +					 */
>>>>> +					copy_failed = true;
>>>>> +					dequeued_len = err;
>>>> If we fail to copy the message we will discard the entire packet.
>>>> Is it acceptable for the user point of view, or we should leave the
>>>> packet in the queue and the user can retry, maybe with a different
>>>> buffer?
>>>>
>>>> Then we can remove the packets only when we successfully copied all the
>>>> fragments.
>>>>
>>>> I'm not sure make sense, maybe better to check also other
>>>> implementations :-)
>>>>
>>>> Thanks,
>>>> Stefano
>>> Understand, i'll check it on weekend, anyway I think it is
>>> not critical for implementation.
>> Yep, I agree.
>>
>>>
>>> I have another question: may be it is useful to research for
>>> approach where packets are not queued until whole message
>>> is received, but copied to user's buffer thus freeing memory.
>>> (like previous implementation, of course with solution of problem
>>> where part of message still in queue, while reader was woken
>>> by timeout or signal).
>>>
>>> I think it is better, because  in current version, sender may set
>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>> to copy data there or drop it(if user's buffer is full). This way
>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>
>>> What do You think?
>> Yep, I see your point and it would be great, but I think the main issues
>> to fix is how to handle a signal while we are waiting other fragments
>> since the other peer can take unspecified time to send them.
>
>What about transport callback, something like 'seqpacket_drain()' or
>
>'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>
>to drop current message. In virtio case this will set special flag in transport,
>
>so on next dequeue, this flag is checked and if it is set - we drop all packets
>
>until EOR found. Then we can copy untouched new record.
>

But in this way, we will lose the entire message.

Is it acceptable for seqpacket?

Stefano


^ permalink raw reply	[flat|nested] 46+ messages in thread

>>>>> +
>>>>> +	if (vvs->msg_count == 0) {
>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>> +		return 0;
>>>>> +	}
>>>>> +
>>>>> +	while (!msg_ready) {
>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>> +
>>>>> +		if (!copy_failed) {
>>>>> +			size_t pkt_len;
>>>>> +			size_t bytes_to_copy;
>>>>> +
>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>> +
>>>>> +			if (bytes_to_copy) {
>>>>> +				int err;
>>>>> +
>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>> +				 */
>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>> +
>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>> +				if (err) {
>>>>> +					/* Copy of message failed, set flag to skip
>>>>> +					 * copy path for rest of fragments. Rest of
>>>>> +					 * fragments will be freed without copy.
>>>>> +					 */
>>>>> +					copy_failed = true;
>>>>> +					dequeued_len = err;
>>>> If we fail to copy the message we will discard the entire packet.
>>>> Is it acceptable for the user point of view, or we should leave the
>>>> packet in the queue and the user can retry, maybe with a different
>>>> buffer?
>>>>
>>>> Then we can remove the packets only when we successfully copied all the
>>>> fragments.
>>>>
>>>> I'm not sure make sense, maybe better to check also other
>>>> implementations :-)
>>>>
>>>> Thanks,
>>>> Stefano
>>> Understand, i'll check it on weekend, anyway I think it is
>>> not critical for implementation.
>> Yep, I agree.
>>
>>>
>>> I have another question: may be it is useful to research for
>>> approach where packets are not queued until whole message
>>> is received, but copied to user's buffer thus freeing memory.
>>> (like previous implementation, of course with solution of problem
>>> where part of message still in queue, while reader was woken
>>> by timeout or signal).
>>>
>>> I think it is better, because  in current version, sender may set
>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>> to copy data there or drop it(if user's buffer is full). This way
>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>
>>> What do You think?
>> Yep, I see your point and it would be great, but I think the main issues
>> to fix is how to handle a signal while we are waiting other fragments
>> since the other peer can take unspecified time to send them.
>
>What about transport callback, something like 'seqpacket_drain()' or
>
>'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>
>to drop current message. In virtio case this will set special flag in transport,
>
>so on next dequeue, this flag is checked and if it is set - we drop all packets
>
>until EOR found. Then we can copy untouched new record.
>

But in this way, we will lose the entire message.

Is it acceptable for SEQPACKET?

Stefano

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 16:25             ` Stefano Garzarella
  (?)
@ 2021-06-18 16:26             ` Arseny Krasnov
  2021-06-21  6:55               ` Arseny Krasnov
  -1 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-18 16:26 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 18.06.2021 19:25, Stefano Garzarella wrote:
> On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>> On 18.06.2021 18:55, Stefano Garzarella wrote:
>>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>>> Hi Arseny,
>>>>> the series looks great, I have just a question below about
>>>>> seqpacket_dequeue.
>>>>>
>>>>> I also sent a couple a simple fixes, it would be great if you can review
>>>>> them:
>>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>>
>>>>>
>>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>>>> nobody will send credit update for rest of record, and sender will wait
>>>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>>>> full, we just send credit update and drop data.
>>>>>>
>>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>>> ---
>>>>>> v10 -> v11:
>>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>>> 2) 'msg_ready' argument removed from callback.
>>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>>
>>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>>> 2 files changed, 89 insertions(+)
>>>>>>
>>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>>> --- a/include/linux/virtio_vsock.h
>>>>>> +++ b/include/linux/virtio_vsock.h
>>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>>> 	u32 rx_bytes;
>>>>>> 	u32 buf_alloc;
>>>>>> 	struct list_head rx_queue;
>>>>>> +	u32 msg_count;
>>>>>> };
>>>>>>
>>>>>> struct virtio_vsock_pkt {
>>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>>> 			       struct msghdr *msg,
>>>>>> 			       size_t len, int flags);
>>>>>>
>>>>>> +ssize_t
>>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>>> +				   struct msghdr *msg,
>>>>>> +				   int flags);
>>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>>
>>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>>> 	return err;
>>>>>> }
>>>>>>
>>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>>> +						 struct msghdr *msg,
>>>>>> +						 int flags)
>>>>>> +{
>>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>>> +	int dequeued_len = 0;
>>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>>> +	bool copy_failed = false;
>>>>>> +	bool msg_ready = false;
>>>>>> +
>>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>>> +
>>>>>> +	if (vvs->msg_count == 0) {
>>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>>> +		return 0;
>>>>>> +	}
>>>>>> +
>>>>>> +	while (!msg_ready) {
>>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>>> +
>>>>>> +		if (!copy_failed) {
>>>>>> +			size_t pkt_len;
>>>>>> +			size_t bytes_to_copy;
>>>>>> +
>>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>>> +
>>>>>> +			if (bytes_to_copy) {
>>>>>> +				int err;
>>>>>> +
>>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>>> +				 */
>>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>>> +
>>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>>> +				if (err) {
>>>>>> +					/* Copy of message failed, set flag to skip
>>>>>> +					 * copy path for rest of fragments. Rest of
>>>>>> +					 * fragments will be freed without copy.
>>>>>> +					 */
>>>>>> +					copy_failed = true;
>>>>>> +					dequeued_len = err;
>>>>> If we fail to copy the message we will discard the entire packet.
>>>>> Is it acceptable for the user point of view, or we should leave the
>>>>> packet in the queue and the user can retry, maybe with a different
>>>>> buffer?
>>>>>
>>>>> Then we can remove the packets only when we successfully copied all the
>>>>> fragments.
>>>>>
>>>>> I'm not sure make sense, maybe better to check also other
>>>>> implementations :-)
>>>>>
>>>>> Thanks,
>>>>> Stefano
>>>> Understand, i'll check it on weekend, anyway I think it is
>>>> not critical for implementation.
>>> Yep, I agree.
>>>
>>>> I have another question: may be it is useful to research for
>>>> approach where packets are not queued until whole message
>>>> is received, but copied to user's buffer thus freeing memory.
>>>> (like previous implementation, of course with solution of problem
>>>> where part of message still in queue, while reader was woken
>>>> by timeout or signal).
>>>>
>>>> I think it is better, because  in current version, sender may set
>>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>>> to copy data there or drop it(if user's buffer is full). This way
>>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>>
>>>> What do You think?
>>> Yep, I see your point and it would be great, but I think the main issues
>>> to fix is how to handle a signal while we are waiting other fragments
>>> since the other peer can take unspecified time to send them.
>> What about transport callback, something like 'seqpacket_drain()' or
>>
>> 'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>>
>> to drop current message. In virtio case this will set special flag in transport,
>>
>> so on next dequeue, this flag is checked and if it is set - we drop all packets
>>
>> until EOR found. Then we can copy untouched new record.
>>
> But in this way, we will lose the entire message.
>
> Is it acceptable for seqpacket?
>
> Stefano
Hm, I'll check it. At least UNIX domain sockets support SEQPACKET.
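For reference, AF_UNIX already supports SOCK_SEQPACKET, so its record-boundary behaviour can be observed from userspace with a short Linux-only sketch (the helper name below is made up for illustration):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Returns the length of the second record received, showing that
 * SOCK_SEQPACKET preserves record boundaries even when the receive
 * buffer could hold both records at once. Returns -1 on error. */
static int seqpacket_boundary_demo(void)
{
	int fds[2];
	char buf[64];
	ssize_t first, second;

	if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) != 0)
		return -1;

	send(fds[0], "hello", 5, 0);
	send(fds[0], "hi", 2, 0);

	first = recv(fds[1], buf, sizeof(buf), 0);   /* one record only */
	second = recv(fds[1], buf, sizeof(buf), 0);  /* the next record */

	close(fds[0]);
	close(fds[1]);
	return first == 5 ? (int)second : -1;
}
```

Unlike SOCK_STREAM, the two sends are never coalesced into one read, which is the guarantee this patchset brings to vsock.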

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-18 16:26             ` Arseny Krasnov
@ 2021-06-21  6:55               ` Arseny Krasnov
  2021-06-21 10:23                   ` Stefano Garzarella
  0 siblings, 1 reply; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-21  6:55 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 18.06.2021 19:26, Arseny Krasnov wrote:
> On 18.06.2021 19:25, Stefano Garzarella wrote:
>> On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>>> On 18.06.2021 18:55, Stefano Garzarella wrote:
>>>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>>>> Hi Arseny,
>>>>>> the series looks great, I have just a question below about
>>>>>> seqpacket_dequeue.
>>>>>>
>>>>>> I also sent a couple a simple fixes, it would be great if you can review
>>>>>> them:
>>>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>>>
>>>>>>
>>>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>>>>> nobody will send credit update for rest of record, and sender will wait
>>>>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>>>>> full, we just send credit update and drop data.
>>>>>>>
>>>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>>>> ---
>>>>>>> v10 -> v11:
>>>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>>>> 2) 'msg_ready' argument removed from callback.
>>>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>>>
>>>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>>>> 2 files changed, 89 insertions(+)
>>>>>>>
>>>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>>>> --- a/include/linux/virtio_vsock.h
>>>>>>> +++ b/include/linux/virtio_vsock.h
>>>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>>>> 	u32 rx_bytes;
>>>>>>> 	u32 buf_alloc;
>>>>>>> 	struct list_head rx_queue;
>>>>>>> +	u32 msg_count;
>>>>>>> };
>>>>>>>
>>>>>>> struct virtio_vsock_pkt {
>>>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>>>> 			       struct msghdr *msg,
>>>>>>> 			       size_t len, int flags);
>>>>>>>
>>>>>>> +ssize_t
>>>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>>>> +				   struct msghdr *msg,
>>>>>>> +				   int flags);
>>>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>>>
>>>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>>>> 	return err;
>>>>>>> }
>>>>>>>
>>>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>>>> +						 struct msghdr *msg,
>>>>>>> +						 int flags)
>>>>>>> +{
>>>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>>>> +	int dequeued_len = 0;
>>>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>>>> +	bool copy_failed = false;
>>>>>>> +	bool msg_ready = false;
>>>>>>> +
>>>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>>>> +
>>>>>>> +	if (vvs->msg_count == 0) {
>>>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>>>> +		return 0;
>>>>>>> +	}
>>>>>>> +
>>>>>>> +	while (!msg_ready) {
>>>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>>>> +
>>>>>>> +		if (!copy_failed) {
>>>>>>> +			size_t pkt_len;
>>>>>>> +			size_t bytes_to_copy;
>>>>>>> +
>>>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>>>> +
>>>>>>> +			if (bytes_to_copy) {
>>>>>>> +				int err;
>>>>>>> +
>>>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>>>> +				 */
>>>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>>>> +
>>>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>>>> +				if (err) {
>>>>>>> +					/* Copy of message failed, set flag to skip
>>>>>>> +					 * copy path for rest of fragments. Rest of
>>>>>>> +					 * fragments will be freed without copy.
>>>>>>> +					 */
>>>>>>> +					copy_failed = true;
>>>>>>> +					dequeued_len = err;
>>>>>> If we fail to copy the message we will discard the entire packet.
>>>>>> Is it acceptable for the user point of view, or we should leave the
>>>>>> packet in the queue and the user can retry, maybe with a different
>>>>>> buffer?
>>>>>>
>>>>>> Then we can remove the packets only when we successfully copied all the
>>>>>> fragments.
>>>>>>
>>>>>> I'm not sure make sense, maybe better to check also other
>>>>>> implementations :-)
>>>>>>
>>>>>> Thanks,
>>>>>> Stefano
>>>>> Understand, i'll check it on weekend, anyway I think it is
>>>>> not critical for implementation.
>>>> Yep, I agree.
>>>>
>>>>> I have another question: may be it is useful to research for
>>>>> approach where packets are not queued until whole message
>>>>> is received, but copied to user's buffer thus freeing memory.
>>>>> (like previous implementation, of course with solution of problem
>>>>> where part of message still in queue, while reader was woken
>>>>> by timeout or signal).
>>>>>
>>>>> I think it is better, because  in current version, sender may set
>>>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>>>> to copy data there or drop it(if user's buffer is full). This way
>>>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>>>
>>>>> What do You think?
>>>> Yep, I see your point and it would be great, but I think the main issues
>>>> to fix is how to handle a signal while we are waiting other fragments
>>>> since the other peer can take unspecified time to send them.
>>> What about transport callback, something like 'seqpacket_drain()' or
>>>
>>> 'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>>>
>>> to drop current message. In virtio case this will set special flag in transport,
>>>
>>> so on next dequeue, this flag is checked and if it is set - we drop all packets
>>>
>>> until EOR found. Then we can copy untouched new record.
>>>
>> But in this way, we will lose the entire message.
>>
>> Is it acceptable for seqpacket?
>>
>> Stefano
> Hm, i'll check it. At least for unix domain sockets - it supports SEQPACKET

Hello, I've checked the AF_UNIX and AF_AX25 SEQPACKET implementations;
in both cases:

1) The datagram is dequeued first, then copied to the user's buffer.

2) The datagram is also freed when copying to the user's buffer fails
(it is not reinserted back).

But, in the case of virtio vsock, I have the following concern with
this approach: for AF_UNIX and AF_AX25 there is a maximum datagram
size, strictly limited by the spec, so no 'setsockopt()' call allows
it to be exceeded. Also, these limits are significantly smaller than
current amounts of RAM. But in our case there is no such limit: a peer
could say 'I want to use a 100MB datagram', and the receiver just
answers 'ok', as setting up the new limit is just a variable
assignment. Now consider 10 peers at 100MB each (no one limits such a
request, because the sockets don't know about each other). I think we
get a denial of service in this case - all the kmalloc() memory will
be wasted on pending records.

I still think that the approach where we copy data from the packet to
the user's buffer without waiting for EOR is better.

Also I'll rebase the QEMU patch today or tomorrow.

What do you think?
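The copy-without-waiting-for-EOR approach can be sketched as a plain userspace C simulation (the struct and function names are illustrative, not from the patchset): each fragment is copied into the user's buffer as soon as it arrives, so kernel-side memory no longer grows with the advertised peer buffer size, and the full message length is still reported for MSG_TRUNC-style semantics.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simulated RW packet fragment; 'eor' mirrors the SEQ_EOR header bit. */
struct frag {
	const char *buf;
	size_t len;
	bool eor;
};

/* Copy fragments into 'user_buf' as they arrive; data beyond 'user_len'
 * is dropped. Returns the total message length so the caller can detect
 * truncation, as with MSG_TRUNC. */
static size_t copy_early_dequeue(const struct frag *frags, size_t nfrags,
				 char *user_buf, size_t user_len)
{
	size_t msg_len = 0, copied = 0;
	size_t i;

	for (i = 0; i < nfrags; i++) {
		size_t room = user_len > copied ? user_len - copied : 0;
		size_t n = frags[i].len < room ? frags[i].len : room;

		memcpy(user_buf + copied, frags[i].buf, n);
		copied += n;
		msg_len += frags[i].len; /* count full length, even if dropped */

		if (frags[i].eor)        /* record boundary reached */
			break;
	}
	return msg_len;
}
```

In the kernel the fragment would be freed right after the copy, which is exactly what keeps at most one fragment resident at a time.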


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-21  6:55               ` Arseny Krasnov
@ 2021-06-21 10:23                   ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-21 10:23 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa

On Mon, Jun 21, 2021 at 09:55:13AM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 19:26, Arseny Krasnov wrote:
>> On 18.06.2021 19:25, Stefano Garzarella wrote:
>>> On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>>>> On 18.06.2021 18:55, Stefano Garzarella wrote:
>>>>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>>>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>>>>> Hi Arseny,
>>>>>>> the series looks great, I have just a question below about
>>>>>>> seqpacket_dequeue.
>>>>>>>
>>>>>>> I also sent a couple a simple fixes, it would be great if you can review
>>>>>>> them:
>>>>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>>>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>>>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>>>>>> nobody will send credit update for rest of record, and sender will wait
>>>>>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>>>>>> full, we just send credit update and drop data.
>>>>>>>>
>>>>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>>>>> ---
>>>>>>>> v10 -> v11:
>>>>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>>>>> 2) 'msg_ready' argument removed from callback.
>>>>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>>>>
>>>>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>>>>> 2 files changed, 89 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>>>>> --- a/include/linux/virtio_vsock.h
>>>>>>>> +++ b/include/linux/virtio_vsock.h
>>>>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>>>>> 	u32 rx_bytes;
>>>>>>>> 	u32 buf_alloc;
>>>>>>>> 	struct list_head rx_queue;
>>>>>>>> +	u32 msg_count;
>>>>>>>> };
>>>>>>>>
>>>>>>>> struct virtio_vsock_pkt {
>>>>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>>>>> 			       struct msghdr *msg,
>>>>>>>> 			       size_t len, int flags);
>>>>>>>>
>>>>>>>> +ssize_t
>>>>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>>>>> +				   struct msghdr *msg,
>>>>>>>> +				   int flags);
>>>>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>>>>
>>>>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>>>>> 	return err;
>>>>>>>> }
>>>>>>>>
>>>>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>>>>> +						 struct msghdr *msg,
>>>>>>>> +						 int flags)
>>>>>>>> +{
>>>>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>>>>> +	int dequeued_len = 0;
>>>>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>>>>> +	bool copy_failed = false;
>>>>>>>> +	bool msg_ready = false;
>>>>>>>> +
>>>>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>>>>> +
>>>>>>>> +	if (vvs->msg_count == 0) {
>>>>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>>>>> +		return 0;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	while (!msg_ready) {
>>>>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>>>>> +
>>>>>>>> +		if (!copy_failed) {
>>>>>>>> +			size_t pkt_len;
>>>>>>>> +			size_t bytes_to_copy;
>>>>>>>> +
>>>>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>>>>> +
>>>>>>>> +			if (bytes_to_copy) {
>>>>>>>> +				int err;
>>>>>>>> +
>>>>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>>>>> +				 */
>>>>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>>>>> +
>>>>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>>>>> +				if (err) {
>>>>>>>> +					/* Copy of message failed, set flag to skip
>>>>>>>> +					 * copy path for rest of fragments. Rest of
>>>>>>>> +					 * fragments will be freed without copy.
>>>>>>>> +					 */
>>>>>>>> +					copy_failed = true;
>>>>>>>> +					dequeued_len = err;
>>>>>>> If we fail to copy the message we will discard the entire packet.
>>>>>>> Is it acceptable for the user point of view, or we should leave the
>>>>>>> packet in the queue and the user can retry, maybe with a different
>>>>>>> buffer?
>>>>>>>
>>>>>>> Then we can remove the packets only when we successfully copied all the
>>>>>>> fragments.
>>>>>>>
>>>>>>> I'm not sure make sense, maybe better to check also other
>>>>>>> implementations :-)
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Stefano
>>>>>> Understand, i'll check it on weekend, anyway I think it is
>>>>>> not critical for implementation.
>>>>> Yep, I agree.
>>>>>
>>>>>> I have another question: may be it is useful to research for
>>>>>> approach where packets are not queued until whole message
>>>>>> is received, but copied to user's buffer thus freeing memory.
>>>>>> (like previous implementation, of course with solution of problem
>>>>>> where part of message still in queue, while reader was woken
>>>>>> by timeout or signal).
>>>>>>
>>>>>> I think it is better, because  in current version, sender may set
>>>>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>>>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>>>>> to copy data there or drop it(if user's buffer is full). This way
>>>>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>>>>
>>>>>> What do You think?
>>>>> Yep, I see your point and it would be great, but I think the main issues
>>>>> to fix is how to handle a signal while we are waiting other fragments
>>>>> since the other peer can take unspecified time to send them.
>>>> What about transport callback, something like 'seqpacket_drain()' or
>>>>
>>>> 'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>>>>
>>>> to drop current message. In virtio case this will set special flag in transport,
>>>>
>>>> so on next dequeue, this flag is checked and if it is set - we drop all packets
>>>>
>>>> until EOR found. Then we can copy untouched new record.
>>>>
>>> But in this way, we will lose the entire message.
>>>
>>> Is it acceptable for seqpacket?
>>>
>>> Stefano
>> Hm, i'll check it. At least for unix domain sockets - it supports SEQPACKET
>
>Hello, I've checked the AF_UNIX and AF_AX25 SEQPACKET implementations;

Great! Thanks for checking!

>
>in both cases:
>
>1) The datagram is dequeued first, then copied to the user's buffer.
>
>2) The datagram is also freed when copying to the user's buffer fails
>(it is not reinserted back).
>
>
>But, in the case of virtio vsock, I have the following concern with
>this approach: for AF_UNIX and AF_AX25 there is a maximum datagram
>size, strictly limited by the spec, so no 'setsockopt()' call allows
>it to be exceeded. Also, these limits are significantly smaller than
>current amounts of RAM. But in our case there is no such limit: a peer
>could say 'I want to use a 100MB datagram', and the receiver just
>answers 'ok',

The receiver sets the limit of its receive buffer and tells the 
transmitter that it should not exceed it. The default should be 256 KB, 
so IIUC this scenario can happen only if the receiver does a 
'setsockopt()' increasing the limit to 100MB. Right?

Maybe we should limit it.
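One way to "limit it" would be a clamp applied when the receiver raises its buffer size; the constant and helper below are purely illustrative and do not exist in the patchset:

```c
/* Hypothetical upper bound for a SEQPACKET receive buffer; 256 KB
 * matches the default buffer size mentioned above. */
#define VSOCK_SEQPACKET_MAX_BUF_SIZE (256u * 1024u)

/* Clamp a setsockopt() request so a peer cannot force the receiver to
 * queue arbitrarily large pending records. */
static unsigned long vsock_clamp_buf_alloc(unsigned long requested)
{
	return requested > VSOCK_SEQPACKET_MAX_BUF_SIZE ?
	       VSOCK_SEQPACKET_MAX_BUF_SIZE : requested;
}
```

With such a clamp, ten misbehaving peers could pin at most 2.5 MB of pending records instead of 1 GB.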

>
>as setting up the new limit is just a variable assignment. Now consider
>10 peers at 100MB each (no one limits such a request, because the
>sockets don't know about each other). I think we get a denial of
>service in this case - all the kmalloc() memory will be wasted on
>pending records.
>
>
>I still think that the approach where we copy data from the packet to
>the user's buffer without waiting for EOR is better.

Okay, in this way we can remove the receive buffer limit, and if we 
receive a signal, we can set MSG_TRUNC, return the partially received 
packet to the user, and free any subsequent fragments.

So, as you proposed, we need a `seqpacket_drop()` to tell the 
transport that, if we were copying an incomplete message, it should 
delete the queued fragments and any others until the next EOR.
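A minimal sketch of that drop-until-EOR behaviour, written as a userspace simulation of the rx queue (the struct and fields are simplified stand-ins for the kernel ones):

```c
#include <stdbool.h>
#include <stddef.h>

struct pkt {
	struct pkt *next;
	bool eor;	/* SEQ_EOR set on the last fragment of a record */
	bool freed;	/* simulation only: marks "freed" packets */
};

/* After a signal/timeout interrupted a partially copied record, drop
 * queued fragments up to and including the next EOR, so the following
 * dequeue starts at an untouched record. Returns the new queue head. */
static struct pkt *drop_until_eor(struct pkt *head)
{
	while (head) {
		struct pkt *cur = head;

		head = head->next;
		cur->freed = true;	/* kernel code would free the pkt */
		if (cur->eor)
			break;		/* boundary reached: stop dropping */
	}
	return head;
}
```

The kernel version would additionally decrement `msg_count` and send a credit update for the freed bytes, since the dropped fragments still consumed receive credit.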

>
>
>Also I'll rebase the QEMU patch today or tomorrow.

Great, please CC me; this is high priority for testing SOCK_SEQPACKET 
with a guest.

>
>
>What do you think?

I'm fine with both, but I slightly prefer the approach we implemented 
because it's easier to handle.

Thanks,
Stefano


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
@ 2021-06-21 10:23                   ` Stefano Garzarella
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Garzarella @ 2021-06-21 10:23 UTC (permalink / raw)
  To: Arseny Krasnov
  Cc: Andra Paraschiv, kvm, Michael S. Tsirkin, netdev, linux-kernel,
	virtualization, oxffffaa, Norbert Slusarek, Stefan Hajnoczi,
	Colin Ian King, Jakub Kicinski, David S. Miller

On Mon, Jun 21, 2021 at 09:55:13AM +0300, Arseny Krasnov wrote:
>
>On 18.06.2021 19:26, Arseny Krasnov wrote:
>> On 18.06.2021 19:25, Stefano Garzarella wrote:
>>> On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>>>> On 18.06.2021 18:55, Stefano Garzarella wrote:
>>>>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>>>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>>>>> Hi Arseny,
>>>>>>> the series looks great, I have just a question below about
>>>>>>> seqpacket_dequeue.
>>>>>>>
>>>>>>> I also sent a couple a simple fixes, it would be great if you can review
>>>>>>> them:
>>>>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>>>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>>>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>>>>>> nobody will send credit update for rest of record, and sender will wait
>>>>>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>>>>>> full, we just send credit update and drop data.
>>>>>>>>
>>>>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>>>>> ---
>>>>>>>> v10 -> v11:
>>>>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>>>>> 2) 'msg_ready' argument removed from callback.
>>>>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>>>>
>>>>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>>>>> 2 files changed, 89 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>>>>> --- a/include/linux/virtio_vsock.h
>>>>>>>> +++ b/include/linux/virtio_vsock.h
>>>>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>>>>> 	u32 rx_bytes;
>>>>>>>> 	u32 buf_alloc;
>>>>>>>> 	struct list_head rx_queue;
>>>>>>>> +	u32 msg_count;
>>>>>>>> };
>>>>>>>>
>>>>>>>> struct virtio_vsock_pkt {
>>>>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>>>>> 			       struct msghdr *msg,
>>>>>>>> 			       size_t len, int flags);
>>>>>>>>
>>>>>>>> +ssize_t
>>>>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>>>>> +				   struct msghdr *msg,
>>>>>>>> +				   int flags);
>>>>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>>>>
>>>>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>>>>> 	return err;
>>>>>>>> }
>>>>>>>>
>>>>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>>>>> +						 struct msghdr *msg,
>>>>>>>> +						 int flags)
>>>>>>>> +{
>>>>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>>>>> +	int dequeued_len = 0;
>>>>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>>>>> +	bool copy_failed = false;
>>>>>>>> +	bool msg_ready = false;
>>>>>>>> +
>>>>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>>>>> +
>>>>>>>> +	if (vvs->msg_count == 0) {
>>>>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>>>>> +		return 0;
>>>>>>>> +	}
>>>>>>>> +
>>>>>>>> +	while (!msg_ready) {
>>>>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>>>>> +
>>>>>>>> +		if (!copy_failed) {
>>>>>>>> +			size_t pkt_len;
>>>>>>>> +			size_t bytes_to_copy;
>>>>>>>> +
>>>>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>>>>> +
>>>>>>>> +			if (bytes_to_copy) {
>>>>>>>> +				int err;
>>>>>>>> +
>>>>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>>>>> +				 */
>>>>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>>>>> +
>>>>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>>>>> +				if (err) {
>>>>>>>> +					/* Copy of message failed, set flag to skip
>>>>>>>> +					 * copy path for rest of fragments. Rest of
>>>>>>>> +					 * fragments will be freed without copy.
>>>>>>>> +					 */
>>>>>>>> +					copy_failed = true;
>>>>>>>> +					dequeued_len = err;
>>>>>>> If we fail to copy the message we will discard the entire packet.
>>>>>>> Is it acceptable for the user point of view, or we should leave the
>>>>>>> packet in the queue and the user can retry, maybe with a different
>>>>>>> buffer?
>>>>>>>
>>>>>>> Then we can remove the packets only when we successfully copied all the
>>>>>>> fragments.
>>>>>>>
>>>>>>> I'm not sure it makes sense; maybe better to also check other
>>>>>>> implementations :-)
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Stefano
>>>>>> Understand, i'll check it on weekend, anyway I think it is
>>>>>> not critical for implementation.
>>>>> Yep, I agree.
>>>>>
>>>>>> I have another question: may be it is useful to research for
>>>>>> approach where packets are not queued until whole message
>>>>>> is received, but copied to user's buffer thus freeing memory.
>>>>>> (like previous implementation, of course with solution of problem
>>>>>> where part of message still in queue, while reader was woken
>>>>>> by timeout or signal).
>>>>>>
>>>>>> I think it is better, because  in current version, sender may set
>>>>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>>>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>>>>> to copy data there or drop it(if user's buffer is full). This way
>>>>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>>>>
>>>>>> What do You think?
>>>>> Yep, I see your point and it would be great, but I think the main issues
>>>>> to fix is how to handle a signal while we are waiting other fragments
>>>>> since the other peer can take unspecified time to send them.
>>>> What about transport callback, something like 'seqpacket_drain()' or
>>>>
>>>> 'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>>>>
>>>> to drop current message. In virtio case this will set special flag in transport,
>>>>
>>>> so on next dequeue, this flag is checked and if it is set - we drop all packets
>>>>
>>>> until EOR found. Then we can copy untouched new record.
>>>>
>>> But in this way, we will lose the entire message.
>>>
>>> Is it acceptable for seqpacket?
>>>
>>> Stefano
>> Hm, i'll check it. At least for unix domain sockets - it supports SEQPACKET
>
>Hello, i've checked AF_UNIX and AF_AX25 SEQPACKET implementations,

Great! Thanks for checking!

>
>in both cases:
>
>1) Datagram is dequeued first, then copied to user's buffer.
>
>2) Datagram is also freed when copying to user's buffer fail
>
>(it is not reinserted back).
>
>
>But, in case of virtio vsock, i've got the following concern in

>this approach: in cases of AF_UNIX or AF_AX25 there is maximum
>
>datagram size, strictly limited by spec, so no 'setsockopt()' call allows
>
>to exceed this. Also these limits are significantly smaller than current
>
>amounts of RAM. But, in our case, there is no such limit: peer could
>
>say 'i want to use 100MB datagram', and receiver just answer 'ok',

The receiver sets the limit of its receive buffer and tells the 
transmitter that it should not exceed it. The default should be 256 KB, 
so IIUC this scenario can happen only if the receiver does a 
'setsockopt()' increasing the limit to 100 MB. Right?

Maybe we should limit it.

>
> as there is just variable assignment to setup new limit. Now, consider
>
>that there will be 10 peers, 100MB each(no one limit such request,
>
>because each socket doesn't know about each other). I think we get
>
>out-of-service in this case - all kmalloc() memory will be wasted for
>
>pending record.
>
>
>I still think, that approach when we copy data from packet to user's
>
>buffer without waiting EOR is better.

Okay, in this way we can remove the receive buffer limit, and maybe if we 
receive a signal we can set MSG_TRUNC and return the partially received 
packet to the user, but we must free any following fragments.

So, as you proposed, we need a `seqpacket_drop()` to tell the 
transport that, if we were copying an incomplete message, it should 
delete the queued fragments and any others until the next EOR.

>
>
>Also i'll rebase QEMU patch today or tomorrow.

Great, please CC me; testing SOCK_SEQPACKET with a guest is a high 
priority.

>
>
>What do You Think?

I'm fine with both, but I slightly prefer the approach we implemented 
because it's easier to handle.

Thanks,
Stefano



* Re: [MASSMAIL KLMS] Re: [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET
  2021-06-21 10:23                   ` Stefano Garzarella
  (?)
@ 2021-06-21 12:27                   ` Arseny Krasnov
  -1 siblings, 0 replies; 46+ messages in thread
From: Arseny Krasnov @ 2021-06-21 12:27 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Norbert Slusarek, Andra Paraschiv,
	Colin Ian King, kvm, virtualization, netdev, linux-kernel,
	oxffffaa


On 21.06.2021 13:23, Stefano Garzarella wrote:
> On Mon, Jun 21, 2021 at 09:55:13AM +0300, Arseny Krasnov wrote:
>> On 18.06.2021 19:26, Arseny Krasnov wrote:
>>> On 18.06.2021 19:25, Stefano Garzarella wrote:
>>>> On Fri, Jun 18, 2021 at 07:08:30PM +0300, Arseny Krasnov wrote:
>>>>> On 18.06.2021 18:55, Stefano Garzarella wrote:
>>>>>> On Fri, Jun 18, 2021 at 06:04:37PM +0300, Arseny Krasnov wrote:
>>>>>>> On 18.06.2021 16:44, Stefano Garzarella wrote:
>>>>>>>> Hi Arseny,
>>>>>>>> the series looks great, I have just a question below about
>>>>>>>> seqpacket_dequeue.
>>>>>>>>
>>>>>>>> I also sent a couple of simple fixes, it would be great if you can review
>>>>>>>> them:
>>>>>>>> https://lore.kernel.org/netdev/20210618133526.300347-1-sgarzare@redhat.com/
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jun 11, 2021 at 02:12:38PM +0300, Arseny Krasnov wrote:
>>>>>>>>> Callback fetches RW packets from rx queue of socket until whole record
>>>>>>>>> is copied(if user's buffer is full, user is not woken up). This is done
>>>>>>>>> to not stall sender, because if we wake up user and it leaves syscall,
>>>>>>>>> nobody will send credit update for rest of record, and sender will wait
>>>>>>>>> for next enter of read syscall at receiver's side. So if user buffer is
>>>>>>>>> full, we just send credit update and drop data.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
>>>>>>>>> ---
>>>>>>>>> v10 -> v11:
>>>>>>>>> 1) 'msg_count' field added to count current number of EORs.
>>>>>>>>> 2) 'msg_ready' argument removed from callback.
>>>>>>>>> 3) If 'memcpy_to_msg()' failed during copy loop, there will be
>>>>>>>>>    no next attempts to copy data, rest of record will be freed.
>>>>>>>>>
>>>>>>>>> include/linux/virtio_vsock.h            |  5 ++
>>>>>>>>> net/vmw_vsock/virtio_transport_common.c | 84 +++++++++++++++++++++++++
>>>>>>>>> 2 files changed, 89 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>>>>>>>> index dc636b727179..1d9a302cb91d 100644
>>>>>>>>> --- a/include/linux/virtio_vsock.h
>>>>>>>>> +++ b/include/linux/virtio_vsock.h
>>>>>>>>> @@ -36,6 +36,7 @@ struct virtio_vsock_sock {
>>>>>>>>> 	u32 rx_bytes;
>>>>>>>>> 	u32 buf_alloc;
>>>>>>>>> 	struct list_head rx_queue;
>>>>>>>>> +	u32 msg_count;
>>>>>>>>> };
>>>>>>>>>
>>>>>>>>> struct virtio_vsock_pkt {
>>>>>>>>> @@ -80,6 +81,10 @@ virtio_transport_dgram_dequeue(struct vsock_sock *vsk,
>>>>>>>>> 			       struct msghdr *msg,
>>>>>>>>> 			       size_t len, int flags);
>>>>>>>>>
>>>>>>>>> +ssize_t
>>>>>>>>> +virtio_transport_seqpacket_dequeue(struct vsock_sock *vsk,
>>>>>>>>> +				   struct msghdr *msg,
>>>>>>>>> +				   int flags);
>>>>>>>>> s64 virtio_transport_stream_has_data(struct vsock_sock *vsk);
>>>>>>>>> s64 virtio_transport_stream_has_space(struct vsock_sock *vsk);
>>>>>>>>>
>>>>>>>>> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>>> index ad0d34d41444..1e1df19ec164 100644
>>>>>>>>> --- a/net/vmw_vsock/virtio_transport_common.c
>>>>>>>>> +++ b/net/vmw_vsock/virtio_transport_common.c
>>>>>>>>> @@ -393,6 +393,78 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
>>>>>>>>> 	return err;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> +static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
>>>>>>>>> +						 struct msghdr *msg,
>>>>>>>>> +						 int flags)
>>>>>>>>> +{
>>>>>>>>> +	struct virtio_vsock_sock *vvs = vsk->trans;
>>>>>>>>> +	struct virtio_vsock_pkt *pkt;
>>>>>>>>> +	int dequeued_len = 0;
>>>>>>>>> +	size_t user_buf_len = msg_data_left(msg);
>>>>>>>>> +	bool copy_failed = false;
>>>>>>>>> +	bool msg_ready = false;
>>>>>>>>> +
>>>>>>>>> +	spin_lock_bh(&vvs->rx_lock);
>>>>>>>>> +
>>>>>>>>> +	if (vvs->msg_count == 0) {
>>>>>>>>> +		spin_unlock_bh(&vvs->rx_lock);
>>>>>>>>> +		return 0;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>> +	while (!msg_ready) {
>>>>>>>>> +		pkt = list_first_entry(&vvs->rx_queue, struct virtio_vsock_pkt, list);
>>>>>>>>> +
>>>>>>>>> +		if (!copy_failed) {
>>>>>>>>> +			size_t pkt_len;
>>>>>>>>> +			size_t bytes_to_copy;
>>>>>>>>> +
>>>>>>>>> +			pkt_len = (size_t)le32_to_cpu(pkt->hdr.len);
>>>>>>>>> +			bytes_to_copy = min(user_buf_len, pkt_len);
>>>>>>>>> +
>>>>>>>>> +			if (bytes_to_copy) {
>>>>>>>>> +				int err;
>>>>>>>>> +
>>>>>>>>> +				/* sk_lock is held by caller so no one else can dequeue.
>>>>>>>>> +				 * Unlock rx_lock since memcpy_to_msg() may sleep.
>>>>>>>>> +				 */
>>>>>>>>> +				spin_unlock_bh(&vvs->rx_lock);
>>>>>>>>> +
>>>>>>>>> +				err = memcpy_to_msg(msg, pkt->buf, bytes_to_copy);
>>>>>>>>> +				if (err) {
>>>>>>>>> +					/* Copy of message failed, set flag to skip
>>>>>>>>> +					 * copy path for rest of fragments. Rest of
>>>>>>>>> +					 * fragments will be freed without copy.
>>>>>>>>> +					 */
>>>>>>>>> +					copy_failed = true;
>>>>>>>>> +					dequeued_len = err;
>>>>>>>> If we fail to copy the message we will discard the entire packet.
>>>>>>>> Is it acceptable for the user point of view, or we should leave the
>>>>>>>> packet in the queue and the user can retry, maybe with a different
>>>>>>>> buffer?
>>>>>>>>
>>>>>>>> Then we can remove the packets only when we successfully copied all the
>>>>>>>> fragments.
>>>>>>>>
>>>>>>>> I'm not sure it makes sense; maybe better to also check other
>>>>>>>> implementations :-)
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Stefano
>>>>>>> Understand, i'll check it on weekend, anyway I think it is
>>>>>>> not critical for implementation.
>>>>>> Yep, I agree.
>>>>>>
>>>>>>> I have another question: may be it is useful to research for
>>>>>>> approach where packets are not queued until whole message
>>>>>>> is received, but copied to user's buffer thus freeing memory.
>>>>>>> (like previous implementation, of course with solution of problem
>>>>>>> where part of message still in queue, while reader was woken
>>>>>>> by timeout or signal).
>>>>>>>
>>>>>>> I think it is better, because  in current version, sender may set
>>>>>>> 'peer_alloc_buf' to  for example 1MB, so at receiver we get
>>>>>>> 1MB of 'kmalloc()' memory allocated, while having user's buffer
>>>>>>> to copy data there or drop it(if user's buffer is full). This way
>>>>>>> won't change spec(e.g. no message id or SEQ_BEGIN will be added).
>>>>>>>
>>>>>>> What do You think?
>>>>>> Yep, I see your point and it would be great, but I think the main issues
>>>>>> to fix is how to handle a signal while we are waiting other fragments
>>>>>> since the other peer can take unspecified time to send them.
>>>>> What about transport callback, something like 'seqpacket_drain()' or
>>>>>
>>>>> 'seqpacket_drop_curr()' - when we got signal or timeout, notify transport
>>>>>
>>>>> to drop current message. In virtio case this will set special flag in transport,
>>>>>
>>>>> so on next dequeue, this flag is checked and if it is set - we drop all packets
>>>>>
>>>>> until EOR found. Then we can copy untouched new record.
>>>>>
>>>> But in this way, we will lose the entire message.
>>>>
>>>> Is it acceptable for seqpacket?
>>>>
>>>> Stefano
>>> Hm, i'll check it. At least for unix domain sockets - it supports SEQPACKET
>> Hello, i've checked AF_UNIX and AF_AX25 SEQPACKET implementations,
> Great! Thanks for checking!
>
>> in both cases:
>>
>> 1) Datagram is dequeued first, then copied to user's buffer.
>>
>> 2) Datagram is also freed when copying to user's buffer fail
>>
>> (it is not reinserted back).
>>
>>
>> But, in case of virtio vsock, i've got the following concern in
>> this approach: in cases of AF_UNIX or AF_AX25 there is maximum
>>
>> datagram size, strictly limited by spec, so no 'setsockopt()' call allows
>>
>> to exceed this. Also these limits are significantly smaller than current
>>
>> amounts of RAM. But, in our case, there is no such limit: peer could
>>
>> say 'i want to use 100MB datagram', and receiver just answer 'ok',
> The receiver sets the limit of its receive buffer and tells the 
> transmitter that it should not exceed it. The default should be 256 KB, 
> so IIUC this scenario can happen only if the receiver does a 
> 'setsockopt()' increasing the limit to 100 MB. Right?
>
> Maybe we should limit it.

Yes, sorry, I meant this: two peers want to transmit a 100 MB message,
so the receiver calls 'setsockopt()' and gets 100 MB of kmalloc()
memory. Maybe from the point of view of these two peers it's OK, but
for the whole system I'm not sure. And a limit is an interesting
question: what value should we use?

>
>>  as there is just variable assignment to setup new limit. Now, consider
>>
>> that there will be 10 peers, 100MB each(no one limit such request,
>>
>> because each socket doesn't know about each other). I think we get
>>
>> out-of-service in this case - all kmalloc() memory will be wasted for
>>
>> pending record.
>>
>>
>> I still think, that approach when we copy data from packet to user's
>>
>> buffer without waiting EOR is better.
> Okay, in this way we can remove the receive buffer limit, and maybe if we 
> receive a signal we can set MSG_TRUNC and return the partially received 
> packet to the user, but we must free any following fragments.
>
> So, as you proposed, we need a `seqpacket_drop()` to tell the 
> transport that, if we were copying an incomplete message, it should 
> delete the queued fragments and any others until the next EOR.

Ok, I'll prepare an RFC patch for this approach; I think it will be
significantly smaller than the merged patchset.

>
>>
>> Also i'll rebase QEMU patch today or tomorrow.
> Great, please CC me; testing SOCK_SEQPACKET with a guest is a high 
> priority.
Ack
>
>>
>> What do You Think?
> I'm fine with both, but I slightly prefer the approach we implemented 
> because it's easier to handle.
>
> Thanks,
> Stefano
>
>


end of thread, other threads:[~2021-06-21 12:27 UTC | newest]

Thread overview: 46+ messages
-- links below jump to the message on this page --
2021-06-11 11:07 [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
2021-06-11 11:09 ` [PATCH v11 01/18] af_vsock: update functions for connectible socket Arseny Krasnov
2021-06-11 11:10 ` [PATCH v11 02/18] af_vsock: separate wait data loop Arseny Krasnov
2021-06-11 11:10 ` [PATCH v11 03/18] af_vsock: separate receive " Arseny Krasnov
2021-06-11 11:10 ` [PATCH v11 04/18] af_vsock: implement SEQPACKET receive loop Arseny Krasnov
2021-06-11 11:10 ` [PATCH v11 05/18] af_vsock: implement send logic for SEQPACKET Arseny Krasnov
2021-06-11 11:11 ` [PATCH v11 06/18] af_vsock: rest of SEQPACKET support Arseny Krasnov
2021-06-11 11:11 ` [PATCH v11 07/18] af_vsock: update comments for stream sockets Arseny Krasnov
2021-06-11 11:11 ` [PATCH v11 08/18] virtio/vsock: set packet's type in virtio_transport_send_pkt_info() Arseny Krasnov
2021-06-11 11:12 ` [PATCH v11 09/18] virtio/vsock: simplify credit update function API Arseny Krasnov
2021-06-11 11:12 ` [PATCH v11 10/18] virtio/vsock: defines and constants for SEQPACKET Arseny Krasnov
2021-06-11 11:12 ` [PATCH v11 11/18] virtio/vsock: dequeue callback for SOCK_SEQPACKET Arseny Krasnov
2021-06-18 13:44   ` Stefano Garzarella
2021-06-18 13:44     ` Stefano Garzarella
2021-06-18 13:51     ` Michael S. Tsirkin
2021-06-18 13:51       ` Michael S. Tsirkin
2021-06-18 14:44       ` Stefano Garzarella
2021-06-18 14:44         ` Stefano Garzarella
2021-06-18 15:04     ` Arseny Krasnov
2021-06-18 15:55       ` Stefano Garzarella
2021-06-18 15:55         ` Stefano Garzarella
2021-06-18 16:08         ` [MASSMAIL KLMS] " Arseny Krasnov
2021-06-18 16:25           ` Stefano Garzarella
2021-06-18 16:25             ` Stefano Garzarella
2021-06-18 16:26             ` Arseny Krasnov
2021-06-21  6:55               ` Arseny Krasnov
2021-06-21 10:23                 ` Stefano Garzarella
2021-06-21 10:23                   ` Stefano Garzarella
2021-06-21 12:27                   ` Arseny Krasnov
2021-06-11 11:12 ` [PATCH v11 12/18] virtio/vsock: add SEQPACKET receive logic Arseny Krasnov
2021-06-11 11:13 ` [PATCH v11 13/18] virtio/vsock: rest of SOCK_SEQPACKET support Arseny Krasnov
2021-06-11 11:13 ` [PATCH v11 14/18] virtio/vsock: enable SEQPACKET for transport Arseny Krasnov
2021-06-11 11:13 ` [PATCH v11 15/18] vhost/vsock: support " Arseny Krasnov
2021-06-11 11:13 ` [PATCH v11 16/18] vsock/loopback: enable " Arseny Krasnov
2021-06-11 11:14 ` [PATCH v11 17/18] vsock_test: add SOCK_SEQPACKET tests Arseny Krasnov
2021-06-11 11:14 ` [PATCH v11 18/18] virtio/vsock: update trace event for SEQPACKET Arseny Krasnov
2021-06-11 11:17 ` [PATCH v11 00/18] virtio/vsock: introduce SOCK_SEQPACKET support Arseny Krasnov
2021-06-11 12:25   ` Stefano Garzarella
2021-06-11 12:25     ` Stefano Garzarella
2021-06-11 14:39     ` Arseny Krasnov
2021-06-11 14:57       ` Stefano Garzarella
2021-06-11 14:57         ` Stefano Garzarella
2021-06-11 15:00         ` Arseny Krasnov
2021-06-11 21:00 ` patchwork-bot+netdevbpf
2021-06-18 13:49   ` Michael S. Tsirkin
2021-06-18 13:49     ` Michael S. Tsirkin
