From: Stefano Garzarella <sgarzare@redhat.com>
To: Junichi Uekawa <uekawa@chromium.org>
Cc: Stefan Hajnoczi <stefanha@redhat.com>, Jason Wang <jasowang@redhat.com>,
	Eric Dumazet <edumazet@google.com>, davem@davemloft.net,
	netdev@vger.kernel.org, Jakub Kicinski <kuba@kernel.org>,
	virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
	mst@redhat.com, Paolo Abeni <pabeni@redhat.com>,
	linux-kernel@vger.kernel.org, Bobby Eshleman <bobby.eshleman@gmail.com>
Subject: Re: [PATCH] vhost/vsock: Use kvmalloc/kvfree for larger packets.
Date: Wed, 28 Sep 2022 10:28:23 +0200
Message-ID: <20220928082823.wyxplop5wtpuurwo@sgarzare-redhat>
In-Reply-To: <20220928064538.667678-1-uekawa@chromium.org>

On Wed, Sep 28, 2022 at 03:45:38PM +0900, Junichi Uekawa wrote:
>When copying a large file over sftp over vsock, data size is usually 32kB,
>and kmalloc seems to fail to try to allocate 32 32kB regions.
>
> Call Trace:
>  [<ffffffffb6a0df64>] dump_stack+0x97/0xdb
>  [<ffffffffb68d6aed>] warn_alloc_failed+0x10f/0x138
>  [<ffffffffb68d868a>] ? __alloc_pages_direct_compact+0x38/0xc8
>  [<ffffffffb664619f>] __alloc_pages_nodemask+0x84c/0x90d
>  [<ffffffffb6646e56>] alloc_kmem_pages+0x17/0x19
>  [<ffffffffb6653a26>] kmalloc_order_trace+0x2b/0xdb
>  [<ffffffffb66682f3>] __kmalloc+0x177/0x1f7
>  [<ffffffffb66e0d94>] ? copy_from_iter+0x8d/0x31d
>  [<ffffffffc0689ab7>] vhost_vsock_handle_tx_kick+0x1fa/0x301 [vhost_vsock]
>  [<ffffffffc06828d9>] vhost_worker+0xf7/0x157 [vhost]
>  [<ffffffffb683ddce>] kthread+0xfd/0x105
>  [<ffffffffc06827e2>] ? vhost_dev_set_owner+0x22e/0x22e [vhost]
>  [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
>  [<ffffffffb6eb332e>] ret_from_fork+0x4e/0x80
>  [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
>
>Work around by doing kvmalloc instead.
>
>Signed-off-by: Junichi Uekawa <uekawa@chromium.org>
>---
>
> drivers/vhost/vsock.c                   | 2 +-
> net/vmw_vsock/virtio_transport_common.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>index 368330417bde..5703775af129 100644
>--- a/drivers/vhost/vsock.c
>+++ b/drivers/vhost/vsock.c
>@@ -393,7 +393,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
> 		return NULL;
> 	}
>
>-	pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
>+	pkt->buf = kvmalloc(pkt->len, GFP_KERNEL);
> 	if (!pkt->buf) {
> 		kfree(pkt);
> 		return NULL;
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index ec2c2afbf0d0..3a12aee33e92 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -1342,7 +1342,7 @@ EXPORT_SYMBOL_GPL(virtio_transport_recv_pkt);
>
> void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
> {
>-	kfree(pkt->buf);
>+	kvfree(pkt->buf);

virtio_transport_free_pkt() is also used in virtio_transport.c and
vsock_loopback.c, where pkt->buf is allocated with kmalloc(), but IIUC
kvfree() can be used with that memory, so this should be fine.

> 	kfree(pkt);
> }
> EXPORT_SYMBOL_GPL(virtio_transport_free_pkt);
>--
>2.37.3.998.g577e59143f-goog
>

This issue should go away with Bobby's work introducing sk_buff [1],
but we can queue this for now.

I'm not sure if we should do the same also in the virtio-vsock driver
(virtio_transport.c). Here in vhost-vsock the allocated buf is only
used by the host, while in the virtio-vsock driver the buffer is
exposed to the device emulated in the host, so it should be physically
contiguous (if not, maybe we need to adjust virtio_vsock_rx_fill()).
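(For reference: kvfree() is safe on both kinds of memory because it
dispatches on the pointer itself, using is_vmalloc_addr() to pick
between vfree() and kfree(); the real implementation lives in
mm/util.c. The userspace sketch below only models that dispatch idea,
all names in it are made up, and the "fall back to vmalloc above a
size threshold" rule is a stand-in for kvmalloc()'s actual behavior,
which falls back when the contiguous allocation fails.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Model: tag each allocation with how it was obtained, so the free
 * path can dispatch on the pointer, the way the real kvfree() uses
 * is_vmalloc_addr() to choose between vfree() and kfree(). */
enum alloc_kind { KIND_KMALLOC, KIND_VMALLOC };

struct tagged {
	enum alloc_kind kind;
	unsigned char data[];
};

/* Pretend contiguous allocations above 16kB fail, forcing the
 * vmalloc fallback, as kvmalloc() does under fragmentation. */
#define MODEL_KMALLOC_MAX (16 * 1024)

static void *model_kvmalloc(size_t len)
{
	struct tagged *t = malloc(sizeof(*t) + len);

	if (!t)
		return NULL;
	t->kind = len <= MODEL_KMALLOC_MAX ? KIND_KMALLOC : KIND_VMALLOC;
	return t->data;
}

/* Returns which path freed the buffer, so the dispatch is visible;
 * the caller never needs to remember which allocator succeeded. */
static enum alloc_kind model_kvfree(void *buf)
{
	struct tagged *t = (struct tagged *)
		((char *)buf - offsetof(struct tagged, data));
	enum alloc_kind kind = t->kind;

	free(t);
	return kind;
}
```

The point the model makes is the one relied on above: a single free
helper works no matter which path allocated the buffer, which is why
switching only the vhost-vsock allocation to kvmalloc() while other
callers keep kmalloc() is safe.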
So for now I think it's fine to use kvmalloc only in vhost-vsock
(eventually we can use it also in vsock_loopback), since Bobby's
patch should rework this code:

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>

[1] https://lore.kernel.org/lkml/65d117ddc530d12a6d47fcc45b38891465a90d9f.1660362668.git.bobby.eshleman@bytedance.com/

Thanks,
Stefano