virtualization.lists.linux-foundation.org archive mirror
* Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in virtio_transport_purge_skbs
       [not found] <000000000000708b1005f79acf5c@google.com>
@ 2023-03-24  8:31 ` Stefano Garzarella
  2023-03-24  8:55   ` Stefano Garzarella
  0 siblings, 1 reply; 5+ messages in thread
From: Stefano Garzarella @ 2023-03-24  8:31 UTC (permalink / raw)
  To: Bobby Eshleman, Bobby Eshleman
  Cc: Krasnov Arseniy Vladimirovich, Krasnov Arseniy, kvm, netdev,
	syzkaller-bugs, linux-kernel, virtualization, syzbot, edumazet,
	stefanha, kuba, pabeni, davem

Hi Bobby,
can you take a look at this report?

It seems related to the changes we made to support skbuff.

Thanks,
Stefano

On Fri, Mar 24, 2023 at 1:52 AM syzbot
<syzbot+befff0a9536049e7902e@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    fff5a5e7f528 Merge tag 'for-linus' of git://git.armlinux.o..
> git tree:       upstream
> console+strace: https://syzkaller.appspot.com/x/log.txt?x=1136e97ac80000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=aaa4b45720ca0519
> dashboard link: https://syzkaller.appspot.com/bug?extid=befff0a9536049e7902e
> compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=14365781c80000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=12eebc66c80000
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/997791f5f9e1/disk-fff5a5e7.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/0b0155b5eac1/vmlinux-fff5a5e7.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/8d98dd2ba6b6/bzImage-fff5a5e7.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+befff0a9536049e7902e@syzkaller.appspotmail.com
>
> general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
> KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
> CPU: 0 PID: 8759 Comm: syz-executor379 Not tainted 6.3.0-rc3-syzkaller-00026-gfff5a5e7f528 #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
> RIP: 0010:virtio_transport_purge_skbs+0x139/0x4c0 net/vmw_vsock/virtio_transport_common.c:1370
> Code: 00 00 00 00 fc ff df 48 89 c2 48 89 44 24 28 48 c1 ea 03 48 8d 04 1a 48 89 44 24 10 eb 29 e8 ee 27 a3 f7 48 89 e8 48 c1 e8 03 <80> 3c 18 00 0f 85 a6 02 00 00 49 39 ec 48 8b 55 00 49 89 ef 0f 84
> RSP: 0018:ffffc90006427b48 EFLAGS: 00010256
> RAX: 0000000000000000 RBX: dffffc0000000000 RCX: 0000000000000000
> RDX: ffff8880211157c0 RSI: ffffffff89dfbd12 RDI: ffff88802c11a018
> RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000003
> R10: fffff52000c84f5b R11: 0000000000000000 R12: ffffffff92179188
> R13: ffffc90006427ba0 R14: ffff88801e0f1100 R15: ffff88802c11a000
> FS:  00007f01fdd51700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f01fdd30718 CR3: 000000002a3f9000 CR4: 0000000000350ef0
> Call Trace:
>  <TASK>
>  vsock_loopback_cancel_pkt+0x1c/0x20 net/vmw_vsock/vsock_loopback.c:48
>  vsock_transport_cancel_pkt net/vmw_vsock/af_vsock.c:1284 [inline]
>  vsock_connect+0x852/0xcc0 net/vmw_vsock/af_vsock.c:1426
>  __sys_connect_file+0x153/0x1a0 net/socket.c:2001
>  __sys_connect+0x165/0x1a0 net/socket.c:2018
>  __do_sys_connect net/socket.c:2028 [inline]
>  __se_sys_connect net/socket.c:2025 [inline]
>  __x64_sys_connect+0x73/0xb0 net/socket.c:2025
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x63/0xcd
> RIP: 0033:0x7f01fdda0159
> Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 41 15 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f01fdd51308 EFLAGS: 00000246 ORIG_RAX: 000000000000002a
> RAX: ffffffffffffffda RBX: 00007f01fde28428 RCX: 00007f01fdda0159
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000004
> RBP: 00007f01fde28420 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 00007f01fddf606c
> R13: 0000000000000000 R14: 00007f01fdd51400 R15: 0000000000022000
>  </TASK>
> Modules linked in:
> ---[ end trace 0000000000000000 ]---
> RIP: 0010:virtio_transport_purge_skbs+0x139/0x4c0 net/vmw_vsock/virtio_transport_common.c:1370
> Code: 00 00 00 00 fc ff df 48 89 c2 48 89 44 24 28 48 c1 ea 03 48 8d 04 1a 48 89 44 24 10 eb 29 e8 ee 27 a3 f7 48 89 e8 48 c1 e8 03 <80> 3c 18 00 0f 85 a6 02 00 00 49 39 ec 48 8b 55 00 49 89 ef 0f 84
> RSP: 0018:ffffc90006427b48 EFLAGS: 00010256
> RAX: 0000000000000000 RBX: dffffc0000000000 RCX: 0000000000000000
> RDX: ffff8880211157c0 RSI: ffffffff89dfbd12 RDI: ffff88802c11a018
> RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000003
> R10: fffff52000c84f5b R11: 0000000000000000 R12: ffffffff92179188
> R13: ffffc90006427ba0 R14: ffff88801e0f1100 R15: ffff88802c11a000
> FS:  00007f01fdd51700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f01fdd30718 CR3: 000000002a3f9000 CR4: 0000000000350ef0
> ----------------
> Code disassembly (best guess), 6 bytes skipped:
>    0:   df 48 89                fisttps -0x77(%rax)
>    3:   c2 48 89                retq   $0x8948
>    6:   44 24 28                rex.R and $0x28,%al
>    9:   48 c1 ea 03             shr    $0x3,%rdx
>    d:   48 8d 04 1a             lea    (%rdx,%rbx,1),%rax
>   11:   48 89 44 24 10          mov    %rax,0x10(%rsp)
>   16:   eb 29                   jmp    0x41
>   18:   e8 ee 27 a3 f7          callq  0xf7a3280b
>   1d:   48 89 e8                mov    %rbp,%rax
>   20:   48 c1 e8 03             shr    $0x3,%rax
> * 24:   80 3c 18 00             cmpb   $0x0,(%rax,%rbx,1) <-- trapping instruction
>   28:   0f 85 a6 02 00 00       jne    0x2d4
>   2e:   49 39 ec                cmp    %rbp,%r12
>   31:   48 8b 55 00             mov    0x0(%rbp),%rdx
>   35:   49 89 ef                mov    %rbp,%r15
>   38:   0f                      .byte 0xf
>   39:   84                      .byte 0x84
>
>
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@googlegroups.com.
>
> syzbot will keep track of this issue. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> syzbot can test patches for this issue, for details see:
> https://goo.gl/tpsmEJ#testing-patches
>


* Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in virtio_transport_purge_skbs
  2023-03-24  8:31 ` [syzbot] [kvm?] [net?] [virt?] general protection fault in virtio_transport_purge_skbs Stefano Garzarella
@ 2023-03-24  8:55   ` Stefano Garzarella
  2023-03-24  9:06     ` Stefano Garzarella
  0 siblings, 1 reply; 5+ messages in thread
From: Stefano Garzarella @ 2023-03-24  8:55 UTC (permalink / raw)
  To: syzbot, Bobby Eshleman, Bobby Eshleman
  Cc: Krasnov Arseniy Vladimirovich, Krasnov Arseniy, kvm, netdev,
	syzkaller-bugs, linux-kernel, virtualization, edumazet, stefanha,
	kuba, pabeni, davem

On Fri, Mar 24, 2023 at 9:31 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> Hi Bobby,
> can you take a look at this report?
>
> It seems related to the changes we made to support skbuff.

Could it be a problem of concurrent access to pkt_queue?

IIUC we should hold pkt_queue.lock when we call skb_queue_splice_init()
and remove pkt_list_lock (or hold pkt_list_lock when calling
virtio_transport_purge_skbs(), but pkt_list_lock seems useless now that
we use skbuff).
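
To make the suspected race concrete, here is a simplified sketch
(paraphrased from the upstream sources, so exact lock flavors may
differ). sk_buff_head embeds its own spinlock, which
virtio_transport_purge_skbs() takes internally, while the loopback
worker splices the same list under the separate pkt_list_lock:

/* include/linux/skbuff.h: the list head already carries a lock */
struct sk_buff_head {
	struct sk_buff	*next;
	struct sk_buff	*prev;
	__u32		qlen;
	spinlock_t	lock;
};

/* CPU 0: connect() -> vsock_loopback_cancel_pkt() ->
 * virtio_transport_purge_skbs() walks pkt_queue under pkt_queue.lock,
 * following each skb's next/prev pointers.
 *
 * CPU 1: vsock_loopback_work() rewrites those same pointers, but under
 * a different lock, so nothing serializes it against CPU 0:
 */
spin_lock_bh(&vsock->pkt_list_lock);             /* NOT pkt_queue.lock */
skb_queue_splice_init(&vsock->pkt_queue, &pkts); /* mutates next/prev  */
spin_unlock_bh(&vsock->pkt_list_lock);

/* CPU 0 can then load a pointer mid-splice and dereference a NULL or
 * KASAN-poisoned value, matching the GPF above.
 */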

#syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git fff5a5e7f528

--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -15,7 +15,6 @@
 struct vsock_loopback {
        struct workqueue_struct *workqueue;
 
-       spinlock_t pkt_list_lock; /* protects pkt_list */
        struct sk_buff_head pkt_queue;
        struct work_struct pkt_work;
 };
@@ -32,9 +31,7 @@ static int vsock_loopback_send_pkt(struct sk_buff *skb)
        struct vsock_loopback *vsock = &the_vsock_loopback;
        int len = skb->len;
 
-       spin_lock_bh(&vsock->pkt_list_lock);
        skb_queue_tail(&vsock->pkt_queue, skb);
-       spin_unlock_bh(&vsock->pkt_list_lock);
 
        queue_work(vsock->workqueue, &vsock->pkt_work);
 
@@ -113,9 +110,9 @@ static void vsock_loopback_work(struct work_struct *work)
 
        skb_queue_head_init(&pkts);
 
-       spin_lock_bh(&vsock->pkt_list_lock);
+       spin_lock_bh(&vsock->pkt_queue.lock);
        skb_queue_splice_init(&vsock->pkt_queue, &pkts);
-       spin_unlock_bh(&vsock->pkt_list_lock);
+       spin_unlock_bh(&vsock->pkt_queue.lock);
 
        while ((skb = __skb_dequeue(&pkts))) {
                virtio_transport_deliver_tap_pkt(skb);
@@ -132,7 +129,6 @@ static int __init vsock_loopback_init(void)
        if (!vsock->workqueue)
                return -ENOMEM;
 
-       spin_lock_init(&vsock->pkt_list_lock);
        skb_queue_head_init(&vsock->pkt_queue);
        INIT_WORK(&vsock->pkt_work, vsock_loopback_work);

>
> Thanks,
> Stefano
>
> On Fri, Mar 24, 2023 at 1:52 AM syzbot
> <syzbot+befff0a9536049e7902e@syzkaller.appspotmail.com> wrote:
> >
> > [... full syzbot report snipped ...]


* Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in virtio_transport_purge_skbs
  2023-03-24  8:55   ` Stefano Garzarella
@ 2023-03-24  9:06     ` Stefano Garzarella
       [not found]       ` <46ba9b55-c6ff-925c-d51a-8da9d1abd2f2@sberdevices.ru>
  2023-03-24 11:58       ` Stefano Garzarella
  0 siblings, 2 replies; 5+ messages in thread
From: Stefano Garzarella @ 2023-03-24  9:06 UTC (permalink / raw)
  To: syzbot, Bobby Eshleman, Bobby Eshleman
  Cc: Krasnov Arseniy Vladimirovich, Krasnov Arseniy, kvm, netdev,
	syzkaller-bugs, linux-kernel, virtualization, edumazet, stefanha,
	kuba, pabeni, davem

On Fri, Mar 24, 2023 at 9:55 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Fri, Mar 24, 2023 at 9:31 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >
> > Hi Bobby,
> > can you take a look at this report?
> >
> > It seems related to the changes we made to support skbuff.
>
> Could it be a problem of concurrent access to pkt_queue?
>
> IIUC we should hold pkt_queue.lock when we call skb_queue_splice_init()
> and remove pkt_list_lock (or hold pkt_list_lock when calling
> virtio_transport_purge_skbs(), but pkt_list_lock seems useless now that
> we use skbuff).
>

The previous patch was missing a hunk; new one attached:

#syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git fff5a5e7f528

--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -15,7 +15,6 @@
 struct vsock_loopback {
        struct workqueue_struct *workqueue;

-       spinlock_t pkt_list_lock; /* protects pkt_list */
        struct sk_buff_head pkt_queue;
        struct work_struct pkt_work;
 };
@@ -32,9 +31,7 @@ static int vsock_loopback_send_pkt(struct sk_buff *skb)
        struct vsock_loopback *vsock = &the_vsock_loopback;
        int len = skb->len;

-       spin_lock_bh(&vsock->pkt_list_lock);
        skb_queue_tail(&vsock->pkt_queue, skb);
-       spin_unlock_bh(&vsock->pkt_list_lock);

        queue_work(vsock->workqueue, &vsock->pkt_work);

@@ -113,9 +110,9 @@ static void vsock_loopback_work(struct work_struct *work)

        skb_queue_head_init(&pkts);

-       spin_lock_bh(&vsock->pkt_list_lock);
+       spin_lock_bh(&vsock->pkt_queue.lock);
        skb_queue_splice_init(&vsock->pkt_queue, &pkts);
-       spin_unlock_bh(&vsock->pkt_list_lock);
+       spin_unlock_bh(&vsock->pkt_queue.lock);

        while ((skb = __skb_dequeue(&pkts))) {
                virtio_transport_deliver_tap_pkt(skb);
@@ -132,7 +129,6 @@ static int __init vsock_loopback_init(void)
        if (!vsock->workqueue)
                return -ENOMEM;

-       spin_lock_init(&vsock->pkt_list_lock);
        skb_queue_head_init(&vsock->pkt_queue);
        INIT_WORK(&vsock->pkt_work, vsock_loopback_work);

@@ -156,9 +152,7 @@ static void __exit vsock_loopback_exit(void)

        flush_work(&vsock->pkt_work);

-       spin_lock_bh(&vsock->pkt_list_lock);
        virtio_vsock_skb_queue_purge(&vsock->pkt_queue);
-       spin_unlock_bh(&vsock->pkt_list_lock);

        destroy_workqueue(vsock->workqueue);
 }


* Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in virtio_transport_purge_skbs
       [not found]       ` <46ba9b55-c6ff-925c-d51a-8da9d1abd2f2@sberdevices.ru>
@ 2023-03-24  9:30         ` Stefano Garzarella
  0 siblings, 0 replies; 5+ messages in thread
From: Stefano Garzarella @ 2023-03-24  9:30 UTC (permalink / raw)
  To: Arseniy Krasnov
  Cc: Bobby Eshleman, kvm, netdev, Bobby Eshleman, linux-kernel,
	virtualization, syzkaller-bugs, syzbot, edumazet, stefanha,
	Krasnov Arseniy, kuba, pabeni, davem

On Fri, Mar 24, 2023 at 10:10 AM Arseniy Krasnov
<avkrasnov@sberdevices.ru> wrote:
> On 24.03.2023 12:06, Stefano Garzarella wrote:
> > On Fri, Mar 24, 2023 at 9:55 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >>
> >> On Fri, Mar 24, 2023 at 9:31 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >>>
> >>> Hi Bobby,
> >>> can you take a look at this report?
> >>>
> >>> It seems related to the changes we made to support skbuff.
> >>
> >> Could it be a problem of concurrent access to pkt_queue?
> >>
> >> IIUC we should hold pkt_queue.lock when we call skb_queue_splice_init()
> >> and remove pkt_list_lock (or hold pkt_list_lock when calling
> >> virtio_transport_purge_skbs(), but pkt_list_lock seems useless now that
> >> we use skbuff).
> >>
> >
> > In the previous patch was missing a hunk, new one attached:
> >
> > #syz test https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git fff5a5e7f528
> >
> > --- a/net/vmw_vsock/vsock_loopback.c
> > +++ b/net/vmw_vsock/vsock_loopback.c
> > @@ -15,7 +15,6 @@
> >  struct vsock_loopback {
> >         struct workqueue_struct *workqueue;
> >
> > -       spinlock_t pkt_list_lock; /* protects pkt_list */
> >         struct sk_buff_head pkt_queue;
> >         struct work_struct pkt_work;
> >  };
> > @@ -32,9 +31,7 @@ static int vsock_loopback_send_pkt(struct sk_buff *skb)
> >         struct vsock_loopback *vsock = &the_vsock_loopback;
> >         int len = skb->len;
> >
> > -       spin_lock_bh(&vsock->pkt_list_lock);
> >         skb_queue_tail(&vsock->pkt_queue, skb);
> Hello Stefano and Bobby,
>
> Small remark: maybe we can use virtio_vsock_skb_queue_tail() here instead
> of skb_queue_tail(). skb_queue_tail() disables IRQs while taking the
> spinlock, whereas virtio_vsock_skb_queue_tail() uses spin_lock_bh(). The
> vhost and virtio transports already use virtio_vsock_skb_queue_tail().
>

Yep, but this shouldn't be related.
I would make this change in a separate patch. ;-)
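
For reference, the difference between the two helpers (paraphrased from
net/core/skbuff.c and include/linux/virtio_vsock.h; double-check
against the tree):

/* skb_queue_tail() must also be callable from hard-IRQ context, so it
 * pays for a full IRQ save/restore around the list lock:
 */
void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *skb)
{
	unsigned long flags;

	spin_lock_irqsave(&list->lock, flags);
	__skb_queue_tail(list, skb);
	spin_unlock_irqrestore(&list->lock, flags);
}

/* virtio_vsock_skb_queue_tail() only needs to exclude bottom halves,
 * which is cheaper on this path:
 */
static inline void virtio_vsock_skb_queue_tail(struct sk_buff_head *list,
					       struct sk_buff *skb)
{
	spin_lock_bh(&list->lock);
	__skb_queue_tail(list, skb);
	spin_unlock_bh(&list->lock);
}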

Thanks,
Stefano


* Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in virtio_transport_purge_skbs
  2023-03-24  9:06     ` Stefano Garzarella
       [not found]       ` <46ba9b55-c6ff-925c-d51a-8da9d1abd2f2@sberdevices.ru>
@ 2023-03-24 11:58       ` Stefano Garzarella
  1 sibling, 0 replies; 5+ messages in thread
From: Stefano Garzarella @ 2023-03-24 11:58 UTC (permalink / raw)
  To: syzbot, Bobby Eshleman, Bobby Eshleman
  Cc: Krasnov Arseniy Vladimirovich, Krasnov Arseniy, kvm, netdev,
	syzkaller-bugs, linux-kernel, virtualization, edumazet, stefanha,
	kuba, pabeni, davem

On Fri, Mar 24, 2023 at 10:06 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Fri, Mar 24, 2023 at 9:55 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
> >
> > On Fri, Mar 24, 2023 at 9:31 AM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > >
> > > Hi Bobby,
> > > can you take a look at this report?
> > >
> > > It seems related to the changes we made to support skbuff.
> >
> > Could it be a problem of concurrent access to pkt_queue?
> >
> > IIUC we should hold pkt_queue.lock when we call skb_queue_splice_init()
> > and remove pkt_list_lock (or hold pkt_list_lock when calling
> > virtio_transport_purge_skbs(), but pkt_list_lock seems useless now that
> > we use skbuff).
> >
>

Patch posted here:
https://lore.kernel.org/netdev/20230324115450.11268-1-sgarzare@redhat.com/
