linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] vsock: cycle only on its own socket
@ 2022-03-10 12:54 Jiyong Park
  2022-03-10 12:54 ` [PATCH 1/2] vsock: each transport cycles only on its own sockets Jiyong Park
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Jiyong Park @ 2022-03-10 12:54 UTC (permalink / raw)
  To: sgarzare, stefanha, mst, jasowang, davem, kuba
  Cc: adelva, kvm, virtualization, netdev, linux-kernel, Jiyong Park

Hi Stefano,

As suggested [1], I've made two patches for easier backporting without
breaking KMI.

PATCH 1 fixes the actual issue (cycling over all vsocks regardless of
their transport) and should be backported.

PATCH 2 refactors PATCH 1 so that the filtering is enforced for all
(including future) users of vsock_for_each_connected_socket.

Thanks,

[1] https://lore.kernel.org/lkml/20220310110036.fgy323c4hvk3mziq@sgarzare-redhat/

Jiyong Park (2):
  vsock: each transport cycles only on its own sockets
  vsock: refactor vsock_for_each_connected_socket

 drivers/vhost/vsock.c            | 3 ++-
 include/net/af_vsock.h           | 3 ++-
 net/vmw_vsock/af_vsock.c         | 9 +++++++--
 net/vmw_vsock/virtio_transport.c | 7 +++++--
 net/vmw_vsock/vmci_transport.c   | 3 ++-
 5 files changed, 18 insertions(+), 7 deletions(-)


base-commit: 3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b
-- 
2.35.1.723.g4982287a31-goog



* [PATCH 1/2] vsock: each transport cycles only on its own sockets
  2022-03-10 12:54 [PATCH 0/2] vsock: cycle only on its own socket Jiyong Park
@ 2022-03-10 12:54 ` Jiyong Park
  2022-03-10 13:01   ` Michael S. Tsirkin
  2022-03-11  2:55   ` kernel test robot
  2022-03-10 12:54 ` [PATCH 2/2] vsock: refactor vsock_for_each_connected_socket Jiyong Park
  2022-03-10 12:57 ` [PATCH 0/2] vsock: cycle only on its own socket Michael S. Tsirkin
  2 siblings, 2 replies; 11+ messages in thread
From: Jiyong Park @ 2022-03-10 12:54 UTC (permalink / raw)
  To: sgarzare, stefanha, mst, jasowang, davem, kuba
  Cc: adelva, kvm, virtualization, netdev, linux-kernel, Jiyong Park

When iterating over sockets using vsock_for_each_connected_socket, make
sure that a transport filters out sockets that don't belong to the
transport.

This fixes an actual issue: in a nested VM configuration, destroying
the nested VM (which often involves closing /dev/vhost-vsock if there
were h2g connections to the nested VM) kills not only the h2g
connections, but also all existing g2h connections to the (outermost)
host, which are totally unrelated.

Tested: Executed the following steps on Cuttlefish (Android running on a
VM) [1]: (1) Enter into an `adb shell` session - to have a g2h
connection inside the VM, (2) open and then close /dev/vhost-vsock by
`exec 3< /dev/vhost-vsock && exec 3<&-`, (3) observe that the adb
session is not reset.

[1] https://android.googlesource.com/device/google/cuttlefish/

Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
Signed-off-by: Jiyong Park <jiyong@google.com>
---
 drivers/vhost/vsock.c            | 4 ++++
 net/vmw_vsock/virtio_transport.c | 7 +++++++
 net/vmw_vsock/vmci_transport.c   | 5 +++++
 3 files changed, 16 insertions(+)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 37f0b4274113..853ddac00d5b 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
 	 * executing.
 	 */
 
+	/* Only handle our own sockets */
+	if (vsk->transport != &vhost_transport.transport)
+		return;
+
 	/* If the peer is still valid, no need to reset connection */
 	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
 		return;
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index fb3302fff627..61b24eb31d4b 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -24,6 +24,7 @@
 static struct workqueue_struct *virtio_vsock_workqueue;
 static struct virtio_vsock __rcu *the_virtio_vsock;
 static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
+static struct virtio_transport virtio_transport; /* forward declaration */
 
 struct virtio_vsock {
 	struct virtio_device *vdev;
@@ -357,11 +358,17 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
 
 static void virtio_vsock_reset_sock(struct sock *sk)
 {
+	struct vsock_sock *vsk = vsock_sk(sk);
+
 	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
 	 * under vsock_table_lock so the sock cannot disappear while we're
 	 * executing.
 	 */
 
+	/* Only handle our own sockets */
+	if (vsk->transport != &virtio_transport.transport)
+		return;
+
 	sk->sk_state = TCP_CLOSE;
 	sk->sk_err = ECONNRESET;
 	sk_error_report(sk);
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index 7aef34e32bdf..cd2f01513fae 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -803,6 +803,11 @@ static void vmci_transport_handle_detach(struct sock *sk)
 	struct vsock_sock *vsk;
 
 	vsk = vsock_sk(sk);
+
+	/* Only handle our own sockets */
+	if (vsk->transport != &vmci_transport)
+		return;
+
 	if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
 		sock_set_flag(sk, SOCK_DONE);
 
-- 
2.35.1.723.g4982287a31-goog



* [PATCH 2/2] vsock: refactor vsock_for_each_connected_socket
  2022-03-10 12:54 [PATCH 0/2] vsock: cycle only on its own socket Jiyong Park
  2022-03-10 12:54 ` [PATCH 1/2] vsock: each transport cycles only on its own sockets Jiyong Park
@ 2022-03-10 12:54 ` Jiyong Park
  2022-03-10 13:09   ` Michael S. Tsirkin
  2022-03-10 12:57 ` [PATCH 0/2] vsock: cycle only on its own socket Michael S. Tsirkin
  2 siblings, 1 reply; 11+ messages in thread
From: Jiyong Park @ 2022-03-10 12:54 UTC (permalink / raw)
  To: sgarzare, stefanha, mst, jasowang, davem, kuba
  Cc: adelva, kvm, virtualization, netdev, linux-kernel, Jiyong Park

vsock_for_each_connected_socket now cycles over sockets of a specific
transport only, rather than asking callers to do the filtering manually,
which is error-prone.

Signed-off-by: Jiyong Park <jiyong@google.com>
---
 drivers/vhost/vsock.c            |  7 ++-----
 include/net/af_vsock.h           |  3 ++-
 net/vmw_vsock/af_vsock.c         |  9 +++++++--
 net/vmw_vsock/virtio_transport.c | 12 ++++--------
 net/vmw_vsock/vmci_transport.c   |  8 ++------
 5 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 853ddac00d5b..e6c9d41db1de 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -722,10 +722,6 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
 	 * executing.
 	 */
 
-	/* Only handle our own sockets */
-	if (vsk->transport != &vhost_transport.transport)
-		return;
-
 	/* If the peer is still valid, no need to reset connection */
 	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
 		return;
@@ -757,7 +753,8 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
 
 	/* Iterating over all connections for all CIDs to find orphans is
 	 * inefficient.  Room for improvement here. */
-	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
+	vsock_for_each_connected_socket(&vhost_transport.transport,
+					vhost_vsock_reset_orphans);
 
 	/* Don't check the owner, because we are in the release path, so we
 	 * need to stop the vsock device in any case.
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index ab207677e0a8..f742e50207fb 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -205,7 +205,8 @@ struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr);
 struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
 					 struct sockaddr_vm *dst);
 void vsock_remove_sock(struct vsock_sock *vsk);
-void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
+void vsock_for_each_connected_socket(struct vsock_transport *transport,
+				     void (*fn)(struct sock *sk));
 int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk);
 bool vsock_find_cid(unsigned int cid);
 
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 38baeb189d4e..f04abf662ec6 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -334,7 +334,8 @@ void vsock_remove_sock(struct vsock_sock *vsk)
 }
 EXPORT_SYMBOL_GPL(vsock_remove_sock);
 
-void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
+void vsock_for_each_connected_socket(struct vsock_transport *transport,
+				     void (*fn)(struct sock *sk))
 {
 	int i;
 
@@ -343,8 +344,12 @@ void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
 	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
 		struct vsock_sock *vsk;
 		list_for_each_entry(vsk, &vsock_connected_table[i],
-				    connected_table)
+				    connected_table) {
+			if (vsk->transport != transport)
+				continue;
+
 			fn(sk_vsock(vsk));
+		}
 	}
 
 	spin_unlock_bh(&vsock_table_lock);
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 61b24eb31d4b..5afc194a58bb 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -358,17 +358,11 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
 
 static void virtio_vsock_reset_sock(struct sock *sk)
 {
-	struct vsock_sock *vsk = vsock_sk(sk);
-
 	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
 	 * under vsock_table_lock so the sock cannot disappear while we're
 	 * executing.
 	 */
 
-	/* Only handle our own sockets */
-	if (vsk->transport != &virtio_transport.transport)
-		return;
-
 	sk->sk_state = TCP_CLOSE;
 	sk->sk_err = ECONNRESET;
 	sk_error_report(sk);
@@ -391,7 +385,8 @@ static void virtio_vsock_event_handle(struct virtio_vsock *vsock,
 	switch (le32_to_cpu(event->id)) {
 	case VIRTIO_VSOCK_EVENT_TRANSPORT_RESET:
 		virtio_vsock_update_guest_cid(vsock);
-		vsock_for_each_connected_socket(virtio_vsock_reset_sock);
+		vsock_for_each_connected_socket(&virtio_transport.transport,
+						virtio_vsock_reset_sock);
 		break;
 	}
 }
@@ -669,7 +664,8 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	synchronize_rcu();
 
 	/* Reset all connected sockets when the device disappear */
-	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
+	vsock_for_each_connected_socket(&virtio_transport.transport,
+					virtio_vsock_reset_sock);
 
 	/* Stop all work handlers to make sure no one is accessing the device,
 	 * so we can safely call virtio_reset_device().
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index cd2f01513fae..735d5e14608a 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -803,11 +803,6 @@ static void vmci_transport_handle_detach(struct sock *sk)
 	struct vsock_sock *vsk;
 
 	vsk = vsock_sk(sk);
-
-	/* Only handle our own sockets */
-	if (vsk->transport != &vmci_transport)
-		return;
-
 	if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
 		sock_set_flag(sk, SOCK_DONE);
 
@@ -887,7 +882,8 @@ static void vmci_transport_qp_resumed_cb(u32 sub_id,
 					 const struct vmci_event_data *e_data,
 					 void *client_data)
 {
-	vsock_for_each_connected_socket(vmci_transport_handle_detach);
+	vsock_for_each_connected_socket(&vmci_transport,
+					vmci_transport_handle_detach);
 }
 
 static void vmci_transport_recv_pkt_work(struct work_struct *work)
-- 
2.35.1.723.g4982287a31-goog



* Re: [PATCH 0/2] vsock: cycle only on its own socket
  2022-03-10 12:54 [PATCH 0/2] vsock: cycle only on its own socket Jiyong Park
  2022-03-10 12:54 ` [PATCH 1/2] vsock: each transport cycles only on its own sockets Jiyong Park
  2022-03-10 12:54 ` [PATCH 2/2] vsock: refactor vsock_for_each_connected_socket Jiyong Park
@ 2022-03-10 12:57 ` Michael S. Tsirkin
  2022-03-10 12:59   ` Michael S. Tsirkin
  2 siblings, 1 reply; 11+ messages in thread
From: Michael S. Tsirkin @ 2022-03-10 12:57 UTC (permalink / raw)
  To: Jiyong Park
  Cc: sgarzare, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

On Thu, Mar 10, 2022 at 09:54:23PM +0900, Jiyong Park wrote:
> Hi Stefano,
> 
> As suggested [1], I've made two patches for easier backporting without
> breaking KMI.
> 
> PATCH 1 fixes the actual issue (cycling over all vsocks regardless of
> their transport) and should be backported.
> 
> PATCH 2 refactors PATCH 1 so that the filtering is enforced for all
> (including future) users of vsock_for_each_connected_socket.
> 
> Thanks,
> 
> [1] https://lore.kernel.org/lkml/20220310110036.fgy323c4hvk3mziq@sgarzare-redhat/


OK, that's better. Please do include a changelog in the future.

Acked-by: Michael S. Tsirkin <mst@redhat.com>



> Jiyong Park (2):
>   vsock: each transport cycles only on its own sockets
>   vsock: refactor vsock_for_each_connected_socket
> 
>  drivers/vhost/vsock.c            | 3 ++-
>  include/net/af_vsock.h           | 3 ++-
>  net/vmw_vsock/af_vsock.c         | 9 +++++++--
>  net/vmw_vsock/virtio_transport.c | 7 +++++--
>  net/vmw_vsock/vmci_transport.c   | 3 ++-
>  5 files changed, 18 insertions(+), 7 deletions(-)
> 
> 
> base-commit: 3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b
> -- 
> 2.35.1.723.g4982287a31-goog



* Re: [PATCH 0/2] vsock: cycle only on its own socket
  2022-03-10 12:57 ` [PATCH 0/2] vsock: cycle only on its own socket Michael S. Tsirkin
@ 2022-03-10 12:59   ` Michael S. Tsirkin
  0 siblings, 0 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2022-03-10 12:59 UTC (permalink / raw)
  To: Jiyong Park
  Cc: sgarzare, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

On Thu, Mar 10, 2022 at 07:57:58AM -0500, Michael S. Tsirkin wrote:
> On Thu, Mar 10, 2022 at 09:54:23PM +0900, Jiyong Park wrote:
> > Hi Stefano,
> > 
> > As suggested [1], I've made two patches for easier backporting without
> > breaking KMI.
> > 
> > PATCH 1 fixes the actual issue (cycling over all vsocks regardless of
> > their transport) and should be backported.
> > 
> > PATCH 2 refactors PATCH 1 so that the filtering is enforced for all
> > (including future) users of vsock_for_each_connected_socket.
> > 
> > Thanks,
> > 
> > [1] https://lore.kernel.org/lkml/20220310110036.fgy323c4hvk3mziq@sgarzare-redhat/
> 
> 
> OK, that's better. Please do include a changelog in the future.
> 
> Acked-by: Michael S. Tsirkin <mst@redhat.com>

Hmm actually I think I have a better idea. Hang on.

> 
> 
> > Jiyong Park (2):
> >   vsock: each transport cycles only on its own sockets
> >   vsock: refactor vsock_for_each_connected_socket
> > 
> >  drivers/vhost/vsock.c            | 3 ++-
> >  include/net/af_vsock.h           | 3 ++-
> >  net/vmw_vsock/af_vsock.c         | 9 +++++++--
> >  net/vmw_vsock/virtio_transport.c | 7 +++++--
> >  net/vmw_vsock/vmci_transport.c   | 3 ++-
> >  5 files changed, 18 insertions(+), 7 deletions(-)
> > 
> > 
> > base-commit: 3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b
> > -- 
> > 2.35.1.723.g4982287a31-goog



* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
  2022-03-10 12:54 ` [PATCH 1/2] vsock: each transport cycles only on its own sockets Jiyong Park
@ 2022-03-10 13:01   ` Michael S. Tsirkin
  2022-03-10 13:11     ` Jiyong Park
  2022-03-10 13:18     ` Stefano Garzarella
  2022-03-11  2:55   ` kernel test robot
  1 sibling, 2 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2022-03-10 13:01 UTC (permalink / raw)
  To: Jiyong Park
  Cc: sgarzare, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

On Thu, Mar 10, 2022 at 09:54:24PM +0900, Jiyong Park wrote:
> When iterating over sockets using vsock_for_each_connected_socket, make
> sure that a transport filters out sockets that don't belong to the
> transport.
> 
> This fixes an actual issue: in a nested VM configuration, destroying
> the nested VM (which often involves closing /dev/vhost-vsock if there
> were h2g connections to the nested VM) kills not only the h2g
> connections, but also all existing g2h connections to the (outermost)
> host, which are totally unrelated.
> 
> Tested: Executed the following steps on Cuttlefish (Android running on a
> VM) [1]: (1) Enter into an `adb shell` session - to have a g2h
> connection inside the VM, (2) open and then close /dev/vhost-vsock by
> `exec 3< /dev/vhost-vsock && exec 3<&-`, (3) observe that the adb
> session is not reset.
> 
> [1] https://android.googlesource.com/device/google/cuttlefish/
> 
> Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
> Signed-off-by: Jiyong Park <jiyong@google.com>
> ---
>  drivers/vhost/vsock.c            | 4 ++++
>  net/vmw_vsock/virtio_transport.c | 7 +++++++
>  net/vmw_vsock/vmci_transport.c   | 5 +++++
>  3 files changed, 16 insertions(+)
> 
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 37f0b4274113..853ddac00d5b 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
>  	 * executing.
>  	 */
>  
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &vhost_transport.transport)
> +		return;
> +
>  	/* If the peer is still valid, no need to reset connection */
>  	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
>  		return;


We know this is incomplete though, so I think it's the wrong thing to do
when you backport, too. If all you worry about is breaking a binary
module interface, how about simply exporting a new function when you
backport? Then you will have both downstream:

void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));

void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
                                    void (*fn)(struct sock *sk));


and then upstream we can squash these two patches.

Hmm?
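
For concreteness, a minimal sketch of what such a downstream-only shim
could look like, reusing the filtered loop from patch 2 (the _new name
is a placeholder, and treating a NULL transport as "no filter" is an
assumption, not something this series defines):

void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
					 void (*fn)(struct sock *sk))
{
	int i;

	spin_lock_bh(&vsock_table_lock);

	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
		struct vsock_sock *vsk;

		list_for_each_entry(vsk, &vsock_connected_table[i],
				    connected_table) {
			/* NULL keeps the old "iterate everything" semantics */
			if (transport && vsk->transport != transport)
				continue;

			fn(sk_vsock(vsk));
		}
	}

	spin_unlock_bh(&vsock_table_lock);
}
EXPORT_SYMBOL_GPL(vsock_for_each_connected_socket_new);

/* The old symbol keeps its signature and unfiltered behavior, so the
 * binary module interface stays intact.
 */
void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
{
	vsock_for_each_connected_socket_new(NULL, fn);
}
EXPORT_SYMBOL_GPL(vsock_for_each_connected_socket);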


> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index fb3302fff627..61b24eb31d4b 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -24,6 +24,7 @@
>  static struct workqueue_struct *virtio_vsock_workqueue;
>  static struct virtio_vsock __rcu *the_virtio_vsock;
>  static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
> +static struct virtio_transport virtio_transport; /* forward declaration */
>  
>  struct virtio_vsock {
>  	struct virtio_device *vdev;
> @@ -357,11 +358,17 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
>  
>  static void virtio_vsock_reset_sock(struct sock *sk)
>  {
> +	struct vsock_sock *vsk = vsock_sk(sk);
> +
>  	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
>  	 * under vsock_table_lock so the sock cannot disappear while we're
>  	 * executing.
>  	 */
>  
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &virtio_transport.transport)
> +		return;
> +
>  	sk->sk_state = TCP_CLOSE;
>  	sk->sk_err = ECONNRESET;
>  	sk_error_report(sk);
> diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> index 7aef34e32bdf..cd2f01513fae 100644
> --- a/net/vmw_vsock/vmci_transport.c
> +++ b/net/vmw_vsock/vmci_transport.c
> @@ -803,6 +803,11 @@ static void vmci_transport_handle_detach(struct sock *sk)
>  	struct vsock_sock *vsk;
>  
>  	vsk = vsock_sk(sk);
> +
> +	/* Only handle our own sockets */
> +	if (vsk->transport != &vmci_transport)
> +		return;
> +
>  	if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
>  		sock_set_flag(sk, SOCK_DONE);
>  
> -- 
> 2.35.1.723.g4982287a31-goog



* Re: [PATCH 2/2] vsock: refactor vsock_for_each_connected_socket
  2022-03-10 12:54 ` [PATCH 2/2] vsock: refactor vsock_for_each_connected_socket Jiyong Park
@ 2022-03-10 13:09   ` Michael S. Tsirkin
  0 siblings, 0 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2022-03-10 13:09 UTC (permalink / raw)
  To: Jiyong Park
  Cc: sgarzare, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

On Thu, Mar 10, 2022 at 09:54:25PM +0900, Jiyong Park wrote:
> vsock_for_each_connected_socket now cycles over sockets of a specific
> transport only, rather than asking callers to do the filtering manually,
> which is error-prone.
> 
> Signed-off-by: Jiyong Park <jiyong@google.com>

Please just squash these two patches. Downstream will do its own thing,
probably distinct from your patch 1, depending on what its requirements
are.

> ---
>  drivers/vhost/vsock.c            |  7 ++-----
>  include/net/af_vsock.h           |  3 ++-
>  net/vmw_vsock/af_vsock.c         |  9 +++++++--
>  net/vmw_vsock/virtio_transport.c | 12 ++++--------
>  net/vmw_vsock/vmci_transport.c   |  8 ++------
>  5 files changed, 17 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 853ddac00d5b..e6c9d41db1de 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -722,10 +722,6 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
>  	 * executing.
>  	 */
>  
> -	/* Only handle our own sockets */
> -	if (vsk->transport != &vhost_transport.transport)
> -		return;
> -
>  	/* If the peer is still valid, no need to reset connection */
>  	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
>  		return;
> @@ -757,7 +753,8 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
>  
>  	/* Iterating over all connections for all CIDs to find orphans is
>  	 * inefficient.  Room for improvement here. */
> -	vsock_for_each_connected_socket(vhost_vsock_reset_orphans);
> +	vsock_for_each_connected_socket(&vhost_transport.transport,
> +					vhost_vsock_reset_orphans);
>  
>  	/* Don't check the owner, because we are in the release path, so we
>  	 * need to stop the vsock device in any case.
> diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
> index ab207677e0a8..f742e50207fb 100644
> --- a/include/net/af_vsock.h
> +++ b/include/net/af_vsock.h
> @@ -205,7 +205,8 @@ struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr);
>  struct sock *vsock_find_connected_socket(struct sockaddr_vm *src,
>  					 struct sockaddr_vm *dst);
>  void vsock_remove_sock(struct vsock_sock *vsk);
> -void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
> +void vsock_for_each_connected_socket(struct vsock_transport *transport,
> +				     void (*fn)(struct sock *sk));
>  int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk);
>  bool vsock_find_cid(unsigned int cid);
>  
> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> index 38baeb189d4e..f04abf662ec6 100644
> --- a/net/vmw_vsock/af_vsock.c
> +++ b/net/vmw_vsock/af_vsock.c
> @@ -334,7 +334,8 @@ void vsock_remove_sock(struct vsock_sock *vsk)
>  }
>  EXPORT_SYMBOL_GPL(vsock_remove_sock);
>  
> -void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
> +void vsock_for_each_connected_socket(struct vsock_transport *transport,
> +				     void (*fn)(struct sock *sk))
>  {
>  	int i;
>  
> @@ -343,8 +344,12 @@ void vsock_for_each_connected_socket(void (*fn)(struct sock *sk))
>  	for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) {
>  		struct vsock_sock *vsk;
>  		list_for_each_entry(vsk, &vsock_connected_table[i],
> -				    connected_table)
> +				    connected_table) {
> +			if (vsk->transport != transport)
> +				continue;
> +
>  			fn(sk_vsock(vsk));
> +		}
>  	}
>  
>  	spin_unlock_bh(&vsock_table_lock);
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index 61b24eb31d4b..5afc194a58bb 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -358,17 +358,11 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
>  
>  static void virtio_vsock_reset_sock(struct sock *sk)
>  {
> -	struct vsock_sock *vsk = vsock_sk(sk);
> -
>  	/* vmci_transport.c doesn't take sk_lock here either.  At least we're
>  	 * under vsock_table_lock so the sock cannot disappear while we're
>  	 * executing.
>  	 */
>  
> -	/* Only handle our own sockets */
> -	if (vsk->transport != &virtio_transport.transport)
> -		return;
> -
>  	sk->sk_state = TCP_CLOSE;
>  	sk->sk_err = ECONNRESET;
>  	sk_error_report(sk);
> @@ -391,7 +385,8 @@ static void virtio_vsock_event_handle(struct virtio_vsock *vsock,
>  	switch (le32_to_cpu(event->id)) {
>  	case VIRTIO_VSOCK_EVENT_TRANSPORT_RESET:
>  		virtio_vsock_update_guest_cid(vsock);
> -		vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> +		vsock_for_each_connected_socket(&virtio_transport.transport,
> +						virtio_vsock_reset_sock);
>  		break;
>  	}
>  }
> @@ -669,7 +664,8 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>  	synchronize_rcu();
>  
>  	/* Reset all connected sockets when the device disappear */
> -	vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> +	vsock_for_each_connected_socket(&virtio_transport.transport,
> +					virtio_vsock_reset_sock);
>  
>  	/* Stop all work handlers to make sure no one is accessing the device,
>  	 * so we can safely call virtio_reset_device().
> diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> index cd2f01513fae..735d5e14608a 100644
> --- a/net/vmw_vsock/vmci_transport.c
> +++ b/net/vmw_vsock/vmci_transport.c
> @@ -803,11 +803,6 @@ static void vmci_transport_handle_detach(struct sock *sk)
>  	struct vsock_sock *vsk;
>  
>  	vsk = vsock_sk(sk);
> -
> -	/* Only handle our own sockets */
> -	if (vsk->transport != &vmci_transport)
> -		return;
> -
>  	if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
>  		sock_set_flag(sk, SOCK_DONE);
>  
> @@ -887,7 +882,8 @@ static void vmci_transport_qp_resumed_cb(u32 sub_id,
>  					 const struct vmci_event_data *e_data,
>  					 void *client_data)
>  {
> -	vsock_for_each_connected_socket(vmci_transport_handle_detach);
> +	vsock_for_each_connected_socket(&vmci_transport,
> +					vmci_transport_handle_detach);
>  }
>  
>  static void vmci_transport_recv_pkt_work(struct work_struct *work)
> -- 
> 2.35.1.723.g4982287a31-goog



* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
  2022-03-10 13:01   ` Michael S. Tsirkin
@ 2022-03-10 13:11     ` Jiyong Park
  2022-03-10 13:16       ` Michael S. Tsirkin
  2022-03-10 13:18     ` Stefano Garzarella
  1 sibling, 1 reply; 11+ messages in thread
From: Jiyong Park @ 2022-03-10 13:11 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: sgarzare, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

Hi Michael,

Thanks for looking into this.

Would you mind if I ask what you mean by incomplete? Is it because non-updated
modules will still have the issue? Please elaborate.


On Thu, Mar 10, 2022 at 10:02 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Mar 10, 2022 at 09:54:24PM +0900, Jiyong Park wrote:
> > When iterating over sockets using vsock_for_each_connected_socket, make
> > sure that a transport filters out sockets that don't belong to the
> > transport.
> >
> > This fixes an actual issue: in a nested VM configuration, destroying
> > the nested VM (which often involves closing /dev/vhost-vsock if there
> > were h2g connections to the nested VM) kills not only the h2g
> > connections, but also all existing g2h connections to the (outermost)
> > host, which are totally unrelated.
> >
> > Tested: Executed the following steps on Cuttlefish (Android running on a
> > VM) [1]: (1) Enter into an `adb shell` session - to have a g2h
> > connection inside the VM, (2) open and then close /dev/vhost-vsock by
> > `exec 3< /dev/vhost-vsock && exec 3<&-`, (3) observe that the adb
> > session is not reset.
> >
> > [1] https://android.googlesource.com/device/google/cuttlefish/
> >
> > Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
> > Signed-off-by: Jiyong Park <jiyong@google.com>
> > ---
> >  drivers/vhost/vsock.c            | 4 ++++
> >  net/vmw_vsock/virtio_transport.c | 7 +++++++
> >  net/vmw_vsock/vmci_transport.c   | 5 +++++
> >  3 files changed, 16 insertions(+)
> >
> > diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> > index 37f0b4274113..853ddac00d5b 100644
> > --- a/drivers/vhost/vsock.c
> > +++ b/drivers/vhost/vsock.c
> > @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
> >        * executing.
> >        */
> >
> > +     /* Only handle our own sockets */
> > +     if (vsk->transport != &vhost_transport.transport)
> > +             return;
> > +
> >       /* If the peer is still valid, no need to reset connection */
> >       if (vhost_vsock_get(vsk->remote_addr.svm_cid))
> >               return;
>
>
> We know this is incomplete though, so I think it's the wrong thing to do
> when you backport, too. If all you worry about is breaking a binary
> module interface, how about simply exporting a new function when you
> backport? Then you will have both downstream:
>
> void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
>
> void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
>                                     void (*fn)(struct sock *sk));
>
>
> and then upstream we can squash these two patches.
>
> Hmm?
>
>
> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > index fb3302fff627..61b24eb31d4b 100644
> > --- a/net/vmw_vsock/virtio_transport.c
> > +++ b/net/vmw_vsock/virtio_transport.c
> > @@ -24,6 +24,7 @@
> >  static struct workqueue_struct *virtio_vsock_workqueue;
> >  static struct virtio_vsock __rcu *the_virtio_vsock;
> >  static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
> > +static struct virtio_transport virtio_transport; /* forward declaration */
> >
> >  struct virtio_vsock {
> >       struct virtio_device *vdev;
> > @@ -357,11 +358,17 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
> >
> >  static void virtio_vsock_reset_sock(struct sock *sk)
> >  {
> > +     struct vsock_sock *vsk = vsock_sk(sk);
> > +
> >       /* vmci_transport.c doesn't take sk_lock here either.  At least we're
> >        * under vsock_table_lock so the sock cannot disappear while we're
> >        * executing.
> >        */
> >
> > +     /* Only handle our own sockets */
> > +     if (vsk->transport != &virtio_transport.transport)
> > +             return;
> > +
> >       sk->sk_state = TCP_CLOSE;
> >       sk->sk_err = ECONNRESET;
> >       sk_error_report(sk);
> > diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> > index 7aef34e32bdf..cd2f01513fae 100644
> > --- a/net/vmw_vsock/vmci_transport.c
> > +++ b/net/vmw_vsock/vmci_transport.c
> > @@ -803,6 +803,11 @@ static void vmci_transport_handle_detach(struct sock *sk)
> >       struct vsock_sock *vsk;
> >
> >       vsk = vsock_sk(sk);
> > +
> > +     /* Only handle our own sockets */
> > +     if (vsk->transport != &vmci_transport)
> > +             return;
> > +
> >       if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
> >               sock_set_flag(sk, SOCK_DONE);
> >
> > --
> > 2.35.1.723.g4982287a31-goog
>


* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
  2022-03-10 13:11     ` Jiyong Park
@ 2022-03-10 13:16       ` Michael S. Tsirkin
  0 siblings, 0 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2022-03-10 13:16 UTC (permalink / raw)
  To: Jiyong Park
  Cc: sgarzare, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

On Thu, Mar 10, 2022 at 10:11:32PM +0900, Jiyong Park wrote:
> Hi Michael,
> 
> Thanks for looking into this.
> 
> Would you mind if I ask what you mean by incomplete? Is it because non-updated
> modules will still have the issue? Please elaborate.

What Stefano wrote:
	I think there is the same problem if the g2h driver is
	unloaded (or a reset event is received after a VM migration): it
	will close all sockets of the nested h2g.
looks like this will keep happening even with your patch, though
I didn't try.

I also don't like how patch 1 adds code that patch 2 removes. Untidy.
Let's just squash and have downstreams worry about stable ABI.


> 
> On Thu, Mar 10, 2022 at 10:02 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Thu, Mar 10, 2022 at 09:54:24PM +0900, Jiyong Park wrote:
> > > When iterating over sockets using vsock_for_each_connected_socket, make
> > > sure that a transport filters out sockets that don't belong to the
> > > transport.
> > >
> > > This fixes an actual issue: in a nested VM configuration, destroying
> > > the nested VM (which often involves closing /dev/vhost-vsock if there
> > > were h2g connections to the nested VM) kills not only the h2g
> > > connections, but also all existing g2h connections to the (outermost)
> > > host, which are totally unrelated.
> > >
> > > Tested: Executed the following steps on Cuttlefish (Android running on a
> > > VM) [1]: (1) Enter into an `adb shell` session - to have a g2h
> > > connection inside the VM, (2) open and then close /dev/vhost-vsock by
> > > `exec 3< /dev/vhost-vsock && exec 3<&-`, (3) observe that the adb
> > > session is not reset.
> > >
> > > [1] https://android.googlesource.com/device/google/cuttlefish/
> > >
> > > Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
> > > Signed-off-by: Jiyong Park <jiyong@google.com>
> > > ---
> > >  drivers/vhost/vsock.c            | 4 ++++
> > >  net/vmw_vsock/virtio_transport.c | 7 +++++++
> > >  net/vmw_vsock/vmci_transport.c   | 5 +++++
> > >  3 files changed, 16 insertions(+)
> > >
> > > diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> > > index 37f0b4274113..853ddac00d5b 100644
> > > --- a/drivers/vhost/vsock.c
> > > +++ b/drivers/vhost/vsock.c
> > > @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
> > >        * executing.
> > >        */
> > >
> > > +     /* Only handle our own sockets */
> > > +     if (vsk->transport != &vhost_transport.transport)
> > > +             return;
> > > +
> > >       /* If the peer is still valid, no need to reset connection */
> > >       if (vhost_vsock_get(vsk->remote_addr.svm_cid))
> > >               return;
> >
> >
> > We know this is incomplete though, so I think it's the wrong thing to do
> > when you backport, too. If all you worry about is breaking a binary
> > module interface, how about simply exporting a new function when you
> > backport? Then you will have both downstream:
> >
> > void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
> >
> > void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
> >                                     void (*fn)(struct sock *sk));
> >
> >
> > and then upstream we can squash these two patches.
> >
> > Hmm?
> >
> >
> > > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > > index fb3302fff627..61b24eb31d4b 100644
> > > --- a/net/vmw_vsock/virtio_transport.c
> > > +++ b/net/vmw_vsock/virtio_transport.c
> > > @@ -24,6 +24,7 @@
> > >  static struct workqueue_struct *virtio_vsock_workqueue;
> > >  static struct virtio_vsock __rcu *the_virtio_vsock;
> > >  static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
> > > +static struct virtio_transport virtio_transport; /* forward declaration */
> > >
> > >  struct virtio_vsock {
> > >       struct virtio_device *vdev;
> > > @@ -357,11 +358,17 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
> > >
> > >  static void virtio_vsock_reset_sock(struct sock *sk)
> > >  {
> > > +     struct vsock_sock *vsk = vsock_sk(sk);
> > > +
> > >       /* vmci_transport.c doesn't take sk_lock here either.  At least we're
> > >        * under vsock_table_lock so the sock cannot disappear while we're
> > >        * executing.
> > >        */
> > >
> > > +     /* Only handle our own sockets */
> > > +     if (vsk->transport != &virtio_transport.transport)
> > > +             return;
> > > +
> > >       sk->sk_state = TCP_CLOSE;
> > >       sk->sk_err = ECONNRESET;
> > >       sk_error_report(sk);
> > > diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> > > index 7aef34e32bdf..cd2f01513fae 100644
> > > --- a/net/vmw_vsock/vmci_transport.c
> > > +++ b/net/vmw_vsock/vmci_transport.c
> > > @@ -803,6 +803,11 @@ static void vmci_transport_handle_detach(struct sock *sk)
> > >       struct vsock_sock *vsk;
> > >
> > >       vsk = vsock_sk(sk);
> > > +
> > > +     /* Only handle our own sockets */
> > > +     if (vsk->transport != &vmci_transport)
> > > +             return;
> > > +
> > >       if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
> > >               sock_set_flag(sk, SOCK_DONE);
> > >
> > > --
> > > 2.35.1.723.g4982287a31-goog
> >



* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
  2022-03-10 13:01   ` Michael S. Tsirkin
  2022-03-10 13:11     ` Jiyong Park
@ 2022-03-10 13:18     ` Stefano Garzarella
  1 sibling, 0 replies; 11+ messages in thread
From: Stefano Garzarella @ 2022-03-10 13:18 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jiyong Park, stefanha, jasowang, davem, kuba, adelva, kvm,
	virtualization, netdev, linux-kernel

On Thu, Mar 10, 2022 at 08:01:53AM -0500, Michael S. Tsirkin wrote:
>On Thu, Mar 10, 2022 at 09:54:24PM +0900, Jiyong Park wrote:
>> When iterating over sockets using vsock_for_each_connected_socket, make
>> sure that a transport filters out sockets that don't belong to the
>> transport.
>>
>> This fixes an actual issue: in a nested VM configuration, destroying
>> the nested VM (which often involves closing /dev/vhost-vsock if there
>> were h2g connections to the nested VM) kills not only the h2g
>> connections, but also all existing g2h connections to the (outermost)
>> host, which are totally unrelated.
>>
>> Tested: Executed the following steps on Cuttlefish (Android running on a
>> VM) [1]: (1) Enter into an `adb shell` session - to have a g2h
>> connection inside the VM, (2) open and then close /dev/vhost-vsock by
>> `exec 3< /dev/vhost-vsock && exec 3<&-`, (3) observe that the adb
>> session is not reset.
>>
>> [1] https://android.googlesource.com/device/google/cuttlefish/
>>
>> Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
>> Signed-off-by: Jiyong Park <jiyong@google.com>
>> ---
>>  drivers/vhost/vsock.c            | 4 ++++
>>  net/vmw_vsock/virtio_transport.c | 7 +++++++
>>  net/vmw_vsock/vmci_transport.c   | 5 +++++
>>  3 files changed, 16 insertions(+)
>>
>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>> index 37f0b4274113..853ddac00d5b 100644
>> --- a/drivers/vhost/vsock.c
>> +++ b/drivers/vhost/vsock.c
>> @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
>>  	 * executing.
>>  	 */
>>
>> +	/* Only handle our own sockets */
>> +	if (vsk->transport != &vhost_transport.transport)
>> +		return;
>> +
>>  	/* If the peer is still valid, no need to reset connection */
>>  	if (vhost_vsock_get(vsk->remote_addr.svm_cid))
>>  		return;
>
>
>We know this is incomplete though, so I think it's the wrong thing to do
>when you backport, too. If all you worry about is breaking a binary
>module interface, how about simply exporting a new function when you
>backport? Then you will have both downstream:
>
>void vsock_for_each_connected_socket(void (*fn)(struct sock *sk));
>
>void vsock_for_each_connected_socket_new(struct vsock_transport *transport,
>                                    void (*fn)(struct sock *sk));
>
>
>and then upstream we can squash these two patches.
>
>Hmm?
>

Yep, reading more of the kernel documentation [1], it seems that
upstream we don't worry about this.

I agree with Michael: it's better to just have the final patch upstream,
and downstream will be handled accordingly.

This should make it easier to backport future patches that depend on
this change into the stable branches.

Thanks,
Stefano

[1] 
https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst



* Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets
  2022-03-10 12:54 ` [PATCH 1/2] vsock: each transport cycles only on its own sockets Jiyong Park
  2022-03-10 13:01   ` Michael S. Tsirkin
@ 2022-03-11  2:55   ` kernel test robot
  1 sibling, 0 replies; 11+ messages in thread
From: kernel test robot @ 2022-03-11  2:55 UTC (permalink / raw)
  To: Jiyong Park, sgarzare, stefanha, mst, jasowang, davem, kuba
  Cc: kbuild-all, adelva, kvm, virtualization, netdev, linux-kernel,
	Jiyong Park

Hi Jiyong,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on 3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b]

url:    https://github.com/0day-ci/linux/commits/Jiyong-Park/vsock-cycle-only-on-its-own-socket/20220310-205638
base:   3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b
config: x86_64-rhel-8.3 (https://download.01.org/0day-ci/archive/20220311/202203111023.SPYFGn7W-lkp@intel.com/config)
compiler: gcc-9 (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/6219060e1d706d7055fb0829b3bf23c5ae84790e
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Jiyong-Park/vsock-cycle-only-on-its-own-socket/20220310-205638
        git checkout 6219060e1d706d7055fb0829b3bf23c5ae84790e
        # save the config file to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash net/vmw_vsock/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   net/vmw_vsock/vmci_transport.c: In function 'vmci_transport_handle_detach':
>> net/vmw_vsock/vmci_transport.c:808:25: error: 'vmci_transport' undeclared (first use in this function)
     808 |  if (vsk->transport != &vmci_transport)
         |                         ^~~~~~~~~~~~~~
   net/vmw_vsock/vmci_transport.c:808:25: note: each undeclared identifier is reported only once for each function it appears in
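
The build failure is presumably because, unlike virtio_transport.c,
patch 1 adds no forward declaration to vmci_transport.c, where
vmci_transport is defined further down the file. A likely one-line fix,
mirroring what the patch already does on the virtio side (assuming
vmci_transport is declared there as a struct vsock_transport):

        static struct vsock_transport vmci_transport; /* forward declaration */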


vim +/vmci_transport +808 net/vmw_vsock/vmci_transport.c

   800	
   801	static void vmci_transport_handle_detach(struct sock *sk)
   802	{
   803		struct vsock_sock *vsk;
   804	
   805		vsk = vsock_sk(sk);
   806	
   807		/* Only handle our own sockets */
 > 808		if (vsk->transport != &vmci_transport)
   809			return;
   810	
   811		if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
   812			sock_set_flag(sk, SOCK_DONE);
   813	
   814			/* On a detach the peer will not be sending or receiving
   815			 * anymore.
   816			 */
   817			vsk->peer_shutdown = SHUTDOWN_MASK;
   818	
   819			/* We should not be sending anymore since the peer won't be
   820			 * there to receive, but we can still receive if there is data
   821			 * left in our consume queue. If the local endpoint is a host,
   822			 * we can't call vsock_stream_has_data, since that may block,
   823			 * but a host endpoint can't read data once the VM has
   824			 * detached, so there is no available data in that case.
   825			 */
   826			if (vsk->local_addr.svm_cid == VMADDR_CID_HOST ||
   827			    vsock_stream_has_data(vsk) <= 0) {
   828				if (sk->sk_state == TCP_SYN_SENT) {
   829					/* The peer may detach from a queue pair while
   830					 * we are still in the connecting state, i.e.,
   831					 * if the peer VM is killed after attaching to
   832					 * a queue pair, but before we complete the
   833					 * handshake. In that case, we treat the detach
   834					 * event like a reset.
   835					 */
   836	
   837					sk->sk_state = TCP_CLOSE;
   838					sk->sk_err = ECONNRESET;
   839					sk_error_report(sk);
   840					return;
   841				}
   842				sk->sk_state = TCP_CLOSE;
   843			}
   844			sk->sk_state_change(sk);
   845		}
   846	}
   847	

---
0-DAY CI Kernel Test Service
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

