linux-nfs.vger.kernel.org archive mirror
* kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
@ 2020-11-09 22:55 Olga Kornievskaia
  2020-11-09 22:59 ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-09 22:55 UTC (permalink / raw)
  To: Chuck Lever; +Cc: linux-nfs

Hi Chuck,

generic/013 on 5.10-rc3 under both soft RoCE and iWarp produces the
following kernel oops.
Are you aware of it? 5.9 ran fine. In 5.10-rc1/rc2 both soft RoCE and
iWarp were broken (outside of NFS) so I can't test there. I'll see what
more I can find out, but wanted to run it by you first. Thank you.

[  126.767318] run fstests generic/013 at 2020-11-09 17:03:25
[  126.931805] BUG: unable to handle page fault for address: ffffa085363bb010
[  126.935622] #PF: supervisor write access in kernel mode
[  126.938202] #PF: error_code(0x0003) - permissions violation
[  126.941042] PGD 3fe02067 P4D 3fe02067 PUD 3fe06067 PMD 74e77063 PTE
80000000763bb061
[  126.944882] Oops: 0003 [#1] SMP PTI
[  126.946985] CPU: 0 PID: 2924 Comm: fsstress Not tainted 5.10.0-rc3+ #32
[  126.950482] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
[  126.955680] RIP: 0010:rpcrdma_convert_iovs.isra.32+0x125/0x190 [rpcrdma]
[  126.959175] Code: 03 74 70 83 f9 05 74 6b 49 8b 45 18 48 85 c0 74
43 49 8b 4d 10 89 c2 89 ce 81 e6 ff 0f 00 00 85 c0 74 31 bf 00 10 00
00 89 f8 <49> 89 48 10 29 f0 49 c7 40 08 00 00 00 00 39 d0 0f 47 c2 49
83 c0
[  126.968901] RSP: 0018:ffffc32703137a68 EFLAGS: 00010286
[  126.971423] RAX: 0000000000001000 RBX: 0000000000000000 RCX: ffffa08542daf000
[  126.974807] RDX: 00000000f34df06c RSI: 0000000000000000 RDI: 0000000000001000
[  126.978224] RBP: 0000000000000000 R08: ffffa085363bb000 R09: 0000000000001000
[  126.982701] R10: ffffeef9c0006f48 R11: ffffa0853ffd60c0 R12: 000000000000cb35
[  126.986327] R13: ffffa0853628a060 R14: ffffa08534f195d0 R15: ffffa0851e213358
[  126.989769] FS:  00007fab74973740(0000) GS:ffffa0853be00000(0000)
knlGS:0000000000000000
[  126.993803] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  126.996953] CR2: ffffa085363bb010 CR3: 0000000074fd0002 CR4: 00000000001706f0
[  127.000593] Call Trace:
[  127.001907]  rpcrdma_marshal_req+0x4b9/0xb30 [rpcrdma]
[  127.004789]  ? lock_timer_base+0x67/0x80
[  127.006710]  xprt_rdma_send_request+0x48/0xd0 [rpcrdma]
[  127.009257]  xprt_transmit+0x130/0x3f0 [sunrpc]
[  127.011499]  ? rpc_clnt_swap_deactivate+0x30/0x30 [sunrpc]
[  127.014225]  ?
rpc_wake_up_task_on_wq_queue_action_locked+0x230/0x230 [sunrpc]
[  127.017848]  call_transmit+0x63/0x70 [sunrpc]
[  127.019973]  __rpc_execute+0x75/0x3e0 [sunrpc]
[  127.022135]  ? xprt_iter_get_helper+0x17/0x30 [sunrpc]
[  127.024793]  rpc_run_task+0x153/0x170 [sunrpc]
[  127.027098]  nfs4_call_sync_custom+0xb/0x30 [nfsv4]
[  127.029617]  nfs4_do_call_sync+0x69/0x90 [nfsv4]
[  127.032001]  _nfs42_proc_listxattrs+0x143/0x200 [nfsv4]
[  127.034766]  nfs42_proc_listxattrs+0x8e/0xc0 [nfsv4]
[  127.037160]  nfs4_listxattr+0x1b8/0x210 [nfsv4]
[  127.039454]  ? __check_object_size+0x162/0x180
[  127.041606]  listxattr+0xd1/0xf0
[  127.043163]  path_listxattr+0x5f/0xb0
[  127.044969]  do_syscall_64+0x33/0x40
[  127.047200]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  127.049644] RIP: 0033:0x7fab74296c8b
[  127.051440] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
01 48
[  127.060978] RSP: 002b:00007fffcddc4a38 EFLAGS: 00000202 ORIG_RAX:
00000000000000c2
[  127.064848] RAX: ffffffffffffffda RBX: 000000000000002a RCX: 00007fab74296c8b
[  127.068244] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000674440
[  127.071642] RBP: 00000000000001f4 R08: 0000000000000000 R09: 00007fffcddc4687
[  127.075214] R10: 0000000000000004 R11: 0000000000000202 R12: 000000000000002a
[  127.078667] R13: 0000000000403e60 R14: 0000000000000000 R15: 0000000000000000
[  127.082783] Modules linked in: cts rpcsec_gss_krb5 nfsv4
dns_resolver nfs lockd grace nfs_ssc rpcrdma rdma_rxe ip6_udp_tunnel
udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core nls_utf8
isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
vmw_balloon ghash_clmulni_intel joydev btusb btrtl pcspkr btbcm
btintel uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
videobuf2_common videodev snd_ens1371 bluetooth snd_ac97_codec
ac97_bus rfkill mc snd_seq snd_pcm ecdh_generic ecc snd_timer
snd_rawmidi snd_seq_device snd soundcore vmw_vmci i2c_piix4
auth_rpcgss sunrpc ip_tables xfs libcrc32c sr_mod cdrom sg ata_generic
vmwgfx drm_kms_helper nvme crc32c_intel serio_raw
[  127.082841]  syscopyarea sysfillrect sysimgblt fb_sys_fops
nvme_core t10_pi cec vmxnet3 ata_piix ahci libahci ttm libata drm
[  127.132635] CR2: ffffa085363bb010
[  127.134527] ---[ end trace 912ce02a00d98fdf ]---
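For reference, the `#PF: error_code(0x0003)` line above can be decoded mechanically. This is only a sketch (the helper name `decode_pf_error` is invented here); the bit meanings follow the standard x86 page-fault error-code convention:

```python
# Decode the x86 page-fault error code printed in an oops, e.g.
# "#PF: error_code(0x0003) - permissions violation".
PF_BITS = [
    (0x01, "protection violation (not a missing page)"),
    (0x02, "write access"),
    (0x04, "user mode"),
    (0x08, "reserved bit set"),
    (0x10, "instruction fetch"),
]

def decode_pf_error(code):
    """Return the human-readable meaning of each set bit."""
    return [name for bit, name in PF_BITS if code & bit]

# 0x0003: a supervisor-mode write hit a permissions violation,
# matching the "#PF" lines in the trace above.
print(decode_pf_error(0x0003))
```

That agrees with the trace: a kernel-mode write to a present page whose PTE forbids writing.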

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-09 22:55 kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp) Olga Kornievskaia
@ 2020-11-09 22:59 ` Chuck Lever
  2020-11-09 23:07   ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-09 22:59 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Linux NFS Mailing List



> On Nov 9, 2020, at 5:55 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> Hi Chuck,
> 
> generic/013 on 5.10-rc3 under both soft RoCE and iWarp produces the
> following kernel oops.
> Are you aware of it? 5.9 ran fine. In 5.10-rc1/rc2 both soft RoCE and
> iWarp were broken (outside of NFS) so I can't test there. I'll see what
> more I can find out, but wanted to run it by you first. Thank you.

Could be this:

https://lore.kernel.org/linux-nfs/160416263202.2615192.7554388264467271587.stgit@manet.1015granger.net/T/#u





--
Chuck Lever





* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-09 22:59 ` Chuck Lever
@ 2020-11-09 23:07   ` Olga Kornievskaia
  2020-11-09 23:17     ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-09 23:07 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Linux NFS Mailing List

On Mon, Nov 9, 2020 at 6:01 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>
> > On Nov 9, 2020, at 5:55 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > Hi Chuck,
> >
> > generic/013 on 5.10-rc3 under both soft RoCE and iWarp produces the
> > following kernel oops.
> > Are you aware of it? 5.9 ran fine. In 5.10-rc1/rc2 both soft RoCE and
> > iWarp were broken (outside of NFS) so I can't test there. I'll see what
> > more I can find out, but wanted to run it by you first. Thank you.
>
> Could be this:
>
> https://lore.kernel.org/linux-nfs/160416263202.2615192.7554388264467271587.stgit@manet.1015granger.net/T/#u

So what does that mean: are you planning to post this patch? That
patch never even ended up in 5.10-rc3?



* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-09 23:07   ` Olga Kornievskaia
@ 2020-11-09 23:17     ` Olga Kornievskaia
  2020-11-10 14:41       ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-09 23:17 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Linux NFS Mailing List

On Mon, Nov 9, 2020 at 6:07 PM Olga Kornievskaia <aglo@umich.edu> wrote:
>
> On Mon, Nov 9, 2020 at 6:01 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >
> >
> >
> > > On Nov 9, 2020, at 5:55 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> > >
> > > Hi Chuck,
> > >
> > > generic/013 on 5.10-rc3 under both soft RoCE and iWarp produces the
> > > following kernel oops.
> > > Are you aware of it? 5.9 ran fine. In 5.10-rc1/rc2 both soft RoCE and
> > > iWarp were broken (outside of NFS) so I can't test there. I'll see what
> > > more I can find out, but wanted to run it by you first. Thank you.
> >
> > Could be this:
> >
> > https://lore.kernel.org/linux-nfs/160416263202.2615192.7554388264467271587.stgit@manet.1015granger.net/T/#u
>
> So what does that mean: are you planning to post this patch? That
> patch never even ended up in 5.10-rc3?

With those changes applied, I get the following oops:

[   54.501538] run fstests generic/013 at 2020-11-09 18:10:16
[   65.555863] general protection fault, probably for non-canonical
address 0x28fb180000000: 0000 [#1] SMP PTI
[   65.562715] CPU: 0 PID: 490 Comm: kworker/0:1H Not tainted 5.10.0-rc3+ #32
[   65.566089] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
[   65.571259] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
[   65.574099] RIP: 0010:rpcrdma_complete_rqst+0x294/0x400 [rpcrdma]
[   65.577254] Code: 4c 63 c2 48 c1 f9 06 48 c1 e1 0c 48 03 0d c4 88
ed e9 48 01 f1 49 83 f8 08 0f 82 68 ff ff ff 48 8b 30 48 8d 79 08 48
83 e7 f8 <48> 89 31 4a 8b 74 00 f8 4a 89 74 01 f8 48 29 f9 48 89 c6 48
29 ce
[   65.587561] RSP: 0018:ffffadbcc18efdd8 EFLAGS: 00010202
[   65.590890] RAX: ffff98a1ddbd208c RBX: ffff98a1b0c20fc0 RCX: 00028fb180000000
[   65.594829] RDX: 0000000000000008 RSI: 0100000000003178 RDI: 00028fb180000008
[   65.598956] RBP: ffff98a1ba249200 R08: 0000000000000008 R09: 0000000000000008
[   65.602641] R10: ffff98a1b0c20fb8 R11: 0000000000000008 R12: ffff98a1f44b8010
[   65.607044] R13: 0000000000000000 R14: 0000000000000078 R15: 0000000000001000
[   65.611062] FS:  0000000000000000(0000) GS:ffff98a1fbe00000(0000)
knlGS:0000000000000000
[   65.615928] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   65.620071] CR2: 00007f048c00b668 CR3: 0000000005bde005 CR4: 00000000001706f0
[   65.623661] Call Trace:
[   65.624907]  __ib_process_cq+0x89/0x150 [ib_core]
[   65.627238]  ib_cq_poll_work+0x26/0x80 [ib_core]
[   65.629623]  process_one_work+0x1a4/0x340
[   65.632506]  ? process_one_work+0x340/0x340
[   65.634627]  worker_thread+0x30/0x370
[   65.636395]  ? process_one_work+0x340/0x340
[   65.639333]  kthread+0x116/0x130
[   65.642022]  ? kthread_park+0x80/0x80
[   65.645183]  ret_from_fork+0x22/0x30
[   65.647019] Modules linked in: cts rpcsec_gss_krb5 nfsv4
dns_resolver nfs lockd grace nfs_ssc rpcrdma rdma_ucm rdma_cm iw_cm
ib_cm ib_uverbs siw ib_core nls_utf8 isofs fuse rfcomm nft_fib_inet
nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4
nf_reject_ipv6 nft_reject nft_ct nf_conntrack nf_defrag_ipv6
nf_defrag_ipv4 tun bridge stp llc ip6_tables nft_compat ip_set
nf_tables nfnetlink bnep vmw_vsock_vmci_transport vsock snd_seq_midi
snd_seq_midi_event intel_rapl_msr intel_rapl_common crct10dif_pclmul
crc32_pclmul vmw_balloon ghash_clmulni_intel btusb btrtl btbcm btintel
pcspkr joydev uvcvideo snd_ens1371 videobuf2_vmalloc snd_ac97_codec
videobuf2_memops ac97_bus videobuf2_v4l2 videobuf2_common bluetooth
snd_seq videodev rfkill snd_pcm mc ecdh_generic ecc snd_timer
snd_rawmidi snd_seq_device snd soundcore vmw_vmci i2c_piix4
auth_rpcgss sunrpc ip_tables xfs libcrc32c sr_mod cdrom sg
crc32c_intel ata_generic serio_raw vmwgfx nvme drm_kms_helper
syscopyarea sysfillrect sysimgblt
[   65.647074]  nvme_core t10_pi fb_sys_fops ata_piix ahci libahci
vmxnet3 cec ttm libata drm
[   65.705629] ---[ end trace acdae4b270638f48 ]---
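The "probably for non-canonical address" note means the faulting value 0x28fb180000000 cannot be a valid virtual address at all, which is why this raised a #GP instead of a #PF. A small sketch of the check (assumes 4-level paging, i.e. 48-bit virtual addresses, where bits 63:47 must be a sign extension of bit 47):

```python
# An x86-64 address is canonical (with 4-level paging / 48-bit VAs)
# when bits 63:47 are all zeros or all ones.
def is_canonical(addr):
    top = addr >> 47          # bits 63:47 as a 17-bit value
    return top in (0, 0x1FFFF)

# The GPF address from the trace above: a corrupted pointer that no
# page-table walk could ever map.
print(is_canonical(0x28fb180000000))
# A normal kernel address from the same register dump, for contrast.
print(is_canonical(0xffff98a1b0c20fc0))
```

So this second crash is a stray or overwritten pointer rather than a bad mapping, consistent with corruption in the reply-processing path.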




* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-09 23:17     ` Olga Kornievskaia
@ 2020-11-10 14:41       ` Chuck Lever
  2020-11-10 16:51         ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-10 14:41 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Linux NFS Mailing List



> On Nov 9, 2020, at 6:17 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Mon, Nov 9, 2020 at 6:07 PM Olga Kornievskaia <aglo@umich.edu> wrote:
>> 
>> On Mon, Nov 9, 2020 at 6:01 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>>> 
>>> 
>>> 
>>>> On Nov 9, 2020, at 5:55 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>> 
>>>> Hi Chuck,
>>>> 
>>>> generic/013 on 5.10-rc3 under both soft RoCE and iWarp produces the
>>>> following kernel oops.
>>>> Are you aware of it? 5.9 ran fine. In 5.10-rc1/rc2 both soft RoCE and
>>>> iWarp were broken (outside of NFS) so I can't test there. I'll see what
>>>> more I can find out, but wanted to run it by you first. Thank you.
>>> 
>>> Could be this:
>>> 
>>> https://lore.kernel.org/linux-nfs/160416263202.2615192.7554388264467271587.stgit@manet.1015granger.net/T/#u
>> 
>> So what does that mean: are you planning to post this patch? That
>> patch never even ended up in 5.10-rc3?

The URL refers to a linux-nfs mail archive, so that patch has already
been posted (in October). The client maintainers need to merge it.
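For readers following along, a sketch of pulling that already-posted patch from the lore archive into a local tree. The `/raw` endpoint and the `b4` helper are assumptions about the reader's tooling, not something stated in this thread:

```shell
# Build the raw-mbox URL for the patch referenced by the
# lore.kernel.org link above, then apply it to a kernel tree.
MSGID="160416263202.2615192.7554388264467271587.stgit@manet.1015granger.net"
MBOX_URL="https://lore.kernel.org/linux-nfs/${MSGID}/raw"
echo "$MBOX_URL"
# In a kernel git tree (assumes curl and git are available):
#   curl -s "$MBOX_URL" | git am
# or, with the b4 helper tool:
#   b4 am "$MSGID"
```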


> With those changes applied, I get the following oops:

What's your workload? Do you have a reproducer?

What's the output of

$ scripts/faddr2line net/sunrpc/xprtrdma/rpc_rdma.o rpcrdma_complete_rqst+0x294

(On my system it's in the middle of rpcrdma_inline_fixup(), for example).
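The faddr2line invocation above is run from the top of a kernel tree built with debug info. A guarded sketch (the wrapper name `resolve_oops_addr` and the guard are added here; the object path and symbol+offset come straight from the oops):

```shell
# Resolve the oops RIP (symbol+offset) back to a source file and line.
# Needs a kernel tree built with CONFIG_DEBUG_INFO=y.
resolve_oops_addr() {
    obj=net/sunrpc/xprtrdma/rpc_rdma.o
    sym=rpcrdma_complete_rqst+0x294
    if [ -f "$obj" ] && [ -x scripts/faddr2line ]; then
        ./scripts/faddr2line "$obj" "$sym"
    else
        echo "run from the top of a built kernel tree"
    fi
}
resolve_oops_addr
```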


> [   54.501538] run fstests generic/013 at 2020-11-09 18:10:16
> [   65.555863] general protection fault, probably for non-canonical
> address 0x28fb180000000: 0000 [#1] SMP PTI
> [   65.562715] CPU: 0 PID: 490 Comm: kworker/0:1H Not tainted 5.10.0-rc3+ #32
> [   65.566089] Hardware name: VMware, Inc. VMware Virtual
> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> [   65.571259] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
> [   65.574099] RIP: 0010:rpcrdma_complete_rqst+0x294/0x400 [rpcrdma]
> [   65.577254] Code: 4c 63 c2 48 c1 f9 06 48 c1 e1 0c 48 03 0d c4 88
> ed e9 48 01 f1 49 83 f8 08 0f 82 68 ff ff ff 48 8b 30 48 8d 79 08 48
> 83 e7 f8 <48> 89 31 4a 8b 74 00 f8 4a 89 74 01 f8 48 29 f9 48 89 c6 48
> 29 ce
> [   65.587561] RSP: 0018:ffffadbcc18efdd8 EFLAGS: 00010202
> [   65.590890] RAX: ffff98a1ddbd208c RBX: ffff98a1b0c20fc0 RCX: 00028fb180000000
> [   65.594829] RDX: 0000000000000008 RSI: 0100000000003178 RDI: 00028fb180000008
> [   65.598956] RBP: ffff98a1ba249200 R08: 0000000000000008 R09: 0000000000000008
> [   65.602641] R10: ffff98a1b0c20fb8 R11: 0000000000000008 R12: ffff98a1f44b8010
> [   65.607044] R13: 0000000000000000 R14: 0000000000000078 R15: 0000000000001000
> [   65.611062] FS:  0000000000000000(0000) GS:ffff98a1fbe00000(0000)
> knlGS:0000000000000000
> [   65.615928] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   65.620071] CR2: 00007f048c00b668 CR3: 0000000005bde005 CR4: 00000000001706f0
> [   65.623661] Call Trace:
> [   65.624907]  __ib_process_cq+0x89/0x150 [ib_core]
> [   65.627238]  ib_cq_poll_work+0x26/0x80 [ib_core]
> [   65.629623]  process_one_work+0x1a4/0x340
> [   65.632506]  ? process_one_work+0x340/0x340
> [   65.634627]  worker_thread+0x30/0x370
> [   65.636395]  ? process_one_work+0x340/0x340
> [   65.639333]  kthread+0x116/0x130
> [   65.642022]  ? kthread_park+0x80/0x80
> [   65.645183]  ret_from_fork+0x22/0x30
> [   65.647019] Modules linked in: cts rpcsec_gss_krb5 nfsv4
> dns_resolver nfs lockd grace nfs_ssc rpcrdma rdma_ucm rdma_cm iw_cm
> ib_cm ib_uverbs siw ib_core nls_utf8 isofs fuse rfcomm nft_fib_inet
> nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4
> nf_reject_ipv6 nft_reject nft_ct nf_conntrack nf_defrag_ipv6
> nf_defrag_ipv4 tun bridge stp llc ip6_tables nft_compat ip_set
> nf_tables nfnetlink bnep vmw_vsock_vmci_transport vsock snd_seq_midi
> snd_seq_midi_event intel_rapl_msr intel_rapl_common crct10dif_pclmul
> crc32_pclmul vmw_balloon ghash_clmulni_intel btusb btrtl btbcm btintel
> pcspkr joydev uvcvideo snd_ens1371 videobuf2_vmalloc snd_ac97_codec
> videobuf2_memops ac97_bus videobuf2_v4l2 videobuf2_common bluetooth
> snd_seq videodev rfkill snd_pcm mc ecdh_generic ecc snd_timer
> snd_rawmidi snd_seq_device snd soundcore vmw_vmci i2c_piix4
> auth_rpcgss sunrpc ip_tables xfs libcrc32c sr_mod cdrom sg
> crc32c_intel ata_generic serio_raw vmwgfx nvme drm_kms_helper
> syscopyarea sysfillrect sysimgblt
> [   65.647074]  nvme_core t10_pi fb_sys_fops ata_piix ahci libahci
> vmxnet3 cec ttm libata drm
> [   65.705629] ---[ end trace acdae4b270638f48 ]---
> 
> 
>> 
>>> 
>>> 
>>> 
>>> 
>>>> 
>>>> [  126.767318] run fstests generic/013 at 2020-11-09 17:03:25
>>>> [  126.931805] BUG: unable to handle page fault for address: ffffa085363bb010
>>>> [  126.935622] #PF: supervisor write access in kernel mode
>>>> [  126.938202] #PF: error_code(0x0003) - permissions violation
>>>> [  126.941042] PGD 3fe02067 P4D 3fe02067 PUD 3fe06067 PMD 74e77063 PTE
>>>> 80000000763bb061
>>>> [  126.944882] Oops: 0003 [#1] SMP PTI
>>>> [  126.946985] CPU: 0 PID: 2924 Comm: fsstress Not tainted 5.10.0-rc3+ #32
>>>> [  126.950482] Hardware name: VMware, Inc. VMware Virtual
>>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
>>>> [  126.955680] RIP: 0010:rpcrdma_convert_iovs.isra.32+0x125/0x190 [rpcrdma]
>>>> [  126.959175] Code: 03 74 70 83 f9 05 74 6b 49 8b 45 18 48 85 c0 74
>>>> 43 49 8b 4d 10 89 c2 89 ce 81 e6 ff 0f 00 00 85 c0 74 31 bf 00 10 00
>>>> 00 89 f8 <49> 89 48 10 29 f0 49 c7 40 08 00 00 00 00 39 d0 0f 47 c2 49
>>>> 83 c0
>>>> [  126.968901] RSP: 0018:ffffc32703137a68 EFLAGS: 00010286
>>>> [  126.971423] RAX: 0000000000001000 RBX: 0000000000000000 RCX: ffffa08542daf000
>>>> [  126.974807] RDX: 00000000f34df06c RSI: 0000000000000000 RDI: 0000000000001000
>>>> [  126.978224] RBP: 0000000000000000 R08: ffffa085363bb000 R09: 0000000000001000
>>>> [  126.982701] R10: ffffeef9c0006f48 R11: ffffa0853ffd60c0 R12: 000000000000cb35
>>>> [  126.986327] R13: ffffa0853628a060 R14: ffffa08534f195d0 R15: ffffa0851e213358
>>>> [  126.989769] FS:  00007fab74973740(0000) GS:ffffa0853be00000(0000)
>>>> knlGS:0000000000000000
>>>> [  126.993803] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>> [  126.996953] CR2: ffffa085363bb010 CR3: 0000000074fd0002 CR4: 00000000001706f0
>>>> [  127.000593] Call Trace:
>>>> [  127.001907]  rpcrdma_marshal_req+0x4b9/0xb30 [rpcrdma]
>>>> [  127.004789]  ? lock_timer_base+0x67/0x80
>>>> [  127.006710]  xprt_rdma_send_request+0x48/0xd0 [rpcrdma]
>>>> [  127.009257]  xprt_transmit+0x130/0x3f0 [sunrpc]
>>>> [  127.011499]  ? rpc_clnt_swap_deactivate+0x30/0x30 [sunrpc]
>>>> [  127.014225]  ?
>>>> rpc_wake_up_task_on_wq_queue_action_locked+0x230/0x230 [sunrpc]
>>>> [  127.017848]  call_transmit+0x63/0x70 [sunrpc]
>>>> [  127.019973]  __rpc_execute+0x75/0x3e0 [sunrpc]
>>>> [  127.022135]  ? xprt_iter_get_helper+0x17/0x30 [sunrpc]
>>>> [  127.024793]  rpc_run_task+0x153/0x170 [sunrpc]
>>>> [  127.027098]  nfs4_call_sync_custom+0xb/0x30 [nfsv4]
>>>> [  127.029617]  nfs4_do_call_sync+0x69/0x90 [nfsv4]
>>>> [  127.032001]  _nfs42_proc_listxattrs+0x143/0x200 [nfsv4]
>>>> [  127.034766]  nfs42_proc_listxattrs+0x8e/0xc0 [nfsv4]
>>>> [  127.037160]  nfs4_listxattr+0x1b8/0x210 [nfsv4]
>>>> [  127.039454]  ? __check_object_size+0x162/0x180
>>>> [  127.041606]  listxattr+0xd1/0xf0
>>>> [  127.043163]  path_listxattr+0x5f/0xb0
>>>> [  127.044969]  do_syscall_64+0x33/0x40
>>>> [  127.047200]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>>> [  127.049644] RIP: 0033:0x7fab74296c8b
>>>> [  127.051440] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
>>>> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
>>>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
>>>> 01 48
>>>> [  127.060978] RSP: 002b:00007fffcddc4a38 EFLAGS: 00000202 ORIG_RAX:
>>>> 00000000000000c2
>>>> [  127.064848] RAX: ffffffffffffffda RBX: 000000000000002a RCX: 00007fab74296c8b
>>>> [  127.068244] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000674440
>>>> [  127.071642] RBP: 00000000000001f4 R08: 0000000000000000 R09: 00007fffcddc4687
>>>> [  127.075214] R10: 0000000000000004 R11: 0000000000000202 R12: 000000000000002a
>>>> [  127.078667] R13: 0000000000403e60 R14: 0000000000000000 R15: 0000000000000000
>>>> [  127.082783] Modules linked in: cts rpcsec_gss_krb5 nfsv4
>>>> dns_resolver nfs lockd grace nfs_ssc rpcrdma rdma_rxe ip6_udp_tunnel
>>>> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core nls_utf8
>>>> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
>>>> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
>>>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
>>>> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
>>>> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
>>>> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
>>>> vmw_balloon ghash_clmulni_intel joydev btusb btrtl pcspkr btbcm
>>>> btintel uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
>>>> videobuf2_common videodev snd_ens1371 bluetooth snd_ac97_codec
>>>> ac97_bus rfkill mc snd_seq snd_pcm ecdh_generic ecc snd_timer
>>>> snd_rawmidi snd_seq_device snd soundcore vmw_vmci i2c_piix4
>>>> auth_rpcgss sunrpc ip_tables xfs libcrc32c sr_mod cdrom sg ata_generic
>>>> vmwgfx drm_kms_helper nvme crc32c_intel serio_raw
>>>> [  127.082841]  syscopyarea sysfillrect sysimgblt fb_sys_fops
>>>> nvme_core t10_pi cec vmxnet3 ata_piix ahci libahci ttm libata drm
>>>> [  127.132635] CR2: ffffa085363bb010
>>>> [  127.134527] ---[ end trace 912ce02a00d98fdf ]---
>>> 
>>> --
>>> Chuck Lever

--
Chuck Lever




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-10 14:41       ` Chuck Lever
@ 2020-11-10 16:51         ` Olga Kornievskaia
  2020-11-10 17:44           ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-10 16:51 UTC (permalink / raw)
  To: Chuck Lever, Anna Schumaker; +Cc: Linux NFS Mailing List

On Tue, Nov 10, 2020 at 9:42 AM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>
> > On Nov 9, 2020, at 6:17 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > On Mon, Nov 9, 2020 at 6:07 PM Olga Kornievskaia <aglo@umich.edu> wrote:
> >>
> >> On Mon, Nov 9, 2020 at 6:01 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>
> >>>
> >>>
> >>>> On Nov 9, 2020, at 5:55 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>
> >>>> Hi Chuck,
> >>>>
> >>>> generic/013 on 5.10-rc3 under both soft RoCE and iWarp produces the
> >>>> following kernel oops.
> >>>> Are you aware of it? 5.9 ran fine. In 5.10-rc1/rc2 both soft RoCE and
> >>>> iWarp were broken (outside of nfs), so I can't test there. I'll see what
> >>>> more I can find out, but wanted to run it by you first. Thank you.
> >>>
> >>> Could be this:
> >>>
> >>> https://lore.kernel.org/linux-nfs/160416263202.2615192.7554388264467271587.stgit@manet.1015granger.net/T/#u
> >>
> >> So what does that mean: are you planning to post this patch? That
> >> patch never even landed in 5.10-rc3?
>
> The URL refers to a linux-nfs mail archive, so that patch has already
> been posted (in October). The client maintainers need to merge it.

Anna, can you add this with the 5.10 fixes?

> > With those changes applied, I get the following oops:
>
> What's your workload? Do you have a reproducer?

I ran generic/013 linux-to-linux.

> What's the output of
>
> $ scripts/faddr2line net/sunrpc/xprtrdma/rpc_rdma.o rpcrdma_complete_rqst+0x294

[aglo@localhost linux-nfs]$ uname -a
Linux localhost.localdomain 5.10.0-rc3+ #32 SMP Mon Nov 9 15:41:14 EST
2020 x86_64 x86_64 x86_64 GNU/Linux
[aglo@localhost linux-nfs]$ scripts/faddr2line
net/sunrpc/xprtrdma/rpc_rdma.o rpcrdma_complete_rqst+0x294
rpcrdma_complete_rqst+0x294/0x400:
memcpy at /home/aglo/linux-nfs/./include/linux/string.h:399
(inlined by) rpcrdma_inline_fixup at
/home/aglo/linux-nfs/net/sunrpc/xprtrdma/rpc_rdma.c:1074
(inlined by) rpcrdma_decode_msg at
/home/aglo/linux-nfs/net/sunrpc/xprtrdma/rpc_rdma.c:1278
(inlined by) rpcrdma_complete_rqst at
/home/aglo/linux-nfs/net/sunrpc/xprtrdma/rpc_rdma.c:1357
[aglo@localhost linux-nfs]$ gdb net/sunrpc/xprtrdma/rpcrdma.ko
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-11.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from net/sunrpc/xprtrdma/rpcrdma.ko...done.
(gdb) l *(rpcrdma_complete_rqst+0x294)
0x1784 is in rpcrdma_complete_rqst (./include/linux/string.h:399).
394			if (q_size < size)
395				__read_overflow2();
396		}
397		if (p_size < size || q_size < size)
398			fortify_panic(__func__);
399		return __underlying_memcpy(p, q, size);
400	}
401
402	__FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
403	{
(gdb)

Also:
(gdb) l *(__ib_process_cq+0x89)
0x3db9 is in __ib_process_cq (drivers/infiniband/core/cq.c:107).
102	 * want to bound this call, thus we need unsigned
103	 * minimum here.
104	 */
105	while ((n = __poll_cq(cq, min_t(u32, batch,
106				       budget - completed), wcs)) > 0) {
107		for (i = 0; i < n; i++) {
108			struct ib_wc *wc = &wcs[i];
109
110			if (wc->wr_cqe)
111				wc->wr_cqe->done(cq, wc);

>
> (On my system it's in the middle of rpcrdma_inline_fixup(), for example).
>
>
> > [   54.501538] run fstests generic/013 at 2020-11-09 18:10:16
> > [   65.555863] general protection fault, probably for non-canonical
> > address 0x28fb180000000: 0000 [#1] SMP PTI
> > [   65.562715] CPU: 0 PID: 490 Comm: kworker/0:1H Not tainted 5.10.0-rc3+ #32
> > [   65.566089] Hardware name: VMware, Inc. VMware Virtual
> > Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> > [   65.571259] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
> > [   65.574099] RIP: 0010:rpcrdma_complete_rqst+0x294/0x400 [rpcrdma]
> > [   65.577254] Code: 4c 63 c2 48 c1 f9 06 48 c1 e1 0c 48 03 0d c4 88
> > ed e9 48 01 f1 49 83 f8 08 0f 82 68 ff ff ff 48 8b 30 48 8d 79 08 48
> > 83 e7 f8 <48> 89 31 4a 8b 74 00 f8 4a 89 74 01 f8 48 29 f9 48 89 c6 48
> > 29 ce
> > [   65.587561] RSP: 0018:ffffadbcc18efdd8 EFLAGS: 00010202
> > [   65.590890] RAX: ffff98a1ddbd208c RBX: ffff98a1b0c20fc0 RCX: 00028fb180000000
> > [   65.594829] RDX: 0000000000000008 RSI: 0100000000003178 RDI: 00028fb180000008
> > [   65.598956] RBP: ffff98a1ba249200 R08: 0000000000000008 R09: 0000000000000008
> > [   65.602641] R10: ffff98a1b0c20fb8 R11: 0000000000000008 R12: ffff98a1f44b8010
> > [   65.607044] R13: 0000000000000000 R14: 0000000000000078 R15: 0000000000001000
> > [   65.611062] FS:  0000000000000000(0000) GS:ffff98a1fbe00000(0000)
> > knlGS:0000000000000000
> > [   65.615928] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [   65.620071] CR2: 00007f048c00b668 CR3: 0000000005bde005 CR4: 00000000001706f0
> > [   65.623661] Call Trace:
> > [   65.624907]  __ib_process_cq+0x89/0x150 [ib_core]
> > [   65.627238]  ib_cq_poll_work+0x26/0x80 [ib_core]
> > [   65.629623]  process_one_work+0x1a4/0x340
> > [   65.632506]  ? process_one_work+0x340/0x340
> > [   65.634627]  worker_thread+0x30/0x370
> > [   65.636395]  ? process_one_work+0x340/0x340
> > [   65.639333]  kthread+0x116/0x130
> > [   65.642022]  ? kthread_park+0x80/0x80
> > [   65.645183]  ret_from_fork+0x22/0x30
> > [   65.647019] Modules linked in: cts rpcsec_gss_krb5 nfsv4
> > dns_resolver nfs lockd grace nfs_ssc rpcrdma rdma_ucm rdma_cm iw_cm
> > ib_cm ib_uverbs siw ib_core nls_utf8 isofs fuse rfcomm nft_fib_inet
> > nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4
> > nf_reject_ipv6 nft_reject nft_ct nf_conntrack nf_defrag_ipv6
> > nf_defrag_ipv4 tun bridge stp llc ip6_tables nft_compat ip_set
> > nf_tables nfnetlink bnep vmw_vsock_vmci_transport vsock snd_seq_midi
> > snd_seq_midi_event intel_rapl_msr intel_rapl_common crct10dif_pclmul
> > crc32_pclmul vmw_balloon ghash_clmulni_intel btusb btrtl btbcm btintel
> > pcspkr joydev uvcvideo snd_ens1371 videobuf2_vmalloc snd_ac97_codec
> > videobuf2_memops ac97_bus videobuf2_v4l2 videobuf2_common bluetooth
> > snd_seq videodev rfkill snd_pcm mc ecdh_generic ecc snd_timer
> > snd_rawmidi snd_seq_device snd soundcore vmw_vmci i2c_piix4
> > auth_rpcgss sunrpc ip_tables xfs libcrc32c sr_mod cdrom sg
> > crc32c_intel ata_generic serio_raw vmwgfx nvme drm_kms_helper
> > syscopyarea sysfillrect sysimgblt
> > [   65.647074]  nvme_core t10_pi fb_sys_fops ata_piix ahci libahci
> > vmxnet3 cec ttm libata drm
> > [   65.705629] ---[ end trace acdae4b270638f48 ]---
> >
> >
> >>
> >>>
> >>>
> >>>
> >>>
> >>>>
> >>>> [  126.767318] run fstests generic/013 at 2020-11-09 17:03:25
> >>>> [  126.931805] BUG: unable to handle page fault for address: ffffa085363bb010
> >>>> [  126.935622] #PF: supervisor write access in kernel mode
> >>>> [  126.938202] #PF: error_code(0x0003) - permissions violation
> >>>> [  126.941042] PGD 3fe02067 P4D 3fe02067 PUD 3fe06067 PMD 74e77063 PTE
> >>>> 80000000763bb061
> >>>> [  126.944882] Oops: 0003 [#1] SMP PTI
> >>>> [  126.946985] CPU: 0 PID: 2924 Comm: fsstress Not tainted 5.10.0-rc3+ #32
> >>>> [  126.950482] Hardware name: VMware, Inc. VMware Virtual
> >>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> >>>> [  126.955680] RIP: 0010:rpcrdma_convert_iovs.isra.32+0x125/0x190 [rpcrdma]
> >>>> [  126.959175] Code: 03 74 70 83 f9 05 74 6b 49 8b 45 18 48 85 c0 74
> >>>> 43 49 8b 4d 10 89 c2 89 ce 81 e6 ff 0f 00 00 85 c0 74 31 bf 00 10 00
> >>>> 00 89 f8 <49> 89 48 10 29 f0 49 c7 40 08 00 00 00 00 39 d0 0f 47 c2 49
> >>>> 83 c0
> >>>> [  126.968901] RSP: 0018:ffffc32703137a68 EFLAGS: 00010286
> >>>> [  126.971423] RAX: 0000000000001000 RBX: 0000000000000000 RCX: ffffa08542daf000
> >>>> [  126.974807] RDX: 00000000f34df06c RSI: 0000000000000000 RDI: 0000000000001000
> >>>> [  126.978224] RBP: 0000000000000000 R08: ffffa085363bb000 R09: 0000000000001000
> >>>> [  126.982701] R10: ffffeef9c0006f48 R11: ffffa0853ffd60c0 R12: 000000000000cb35
> >>>> [  126.986327] R13: ffffa0853628a060 R14: ffffa08534f195d0 R15: ffffa0851e213358
> >>>> [  126.989769] FS:  00007fab74973740(0000) GS:ffffa0853be00000(0000)
> >>>> knlGS:0000000000000000
> >>>> [  126.993803] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >>>> [  126.996953] CR2: ffffa085363bb010 CR3: 0000000074fd0002 CR4: 00000000001706f0
> >>>> [  127.000593] Call Trace:
> >>>> [  127.001907]  rpcrdma_marshal_req+0x4b9/0xb30 [rpcrdma]
> >>>> [  127.004789]  ? lock_timer_base+0x67/0x80
> >>>> [  127.006710]  xprt_rdma_send_request+0x48/0xd0 [rpcrdma]
> >>>> [  127.009257]  xprt_transmit+0x130/0x3f0 [sunrpc]
> >>>> [  127.011499]  ? rpc_clnt_swap_deactivate+0x30/0x30 [sunrpc]
> >>>> [  127.014225]  ?
> >>>> rpc_wake_up_task_on_wq_queue_action_locked+0x230/0x230 [sunrpc]
> >>>> [  127.017848]  call_transmit+0x63/0x70 [sunrpc]
> >>>> [  127.019973]  __rpc_execute+0x75/0x3e0 [sunrpc]
> >>>> [  127.022135]  ? xprt_iter_get_helper+0x17/0x30 [sunrpc]
> >>>> [  127.024793]  rpc_run_task+0x153/0x170 [sunrpc]
> >>>> [  127.027098]  nfs4_call_sync_custom+0xb/0x30 [nfsv4]
> >>>> [  127.029617]  nfs4_do_call_sync+0x69/0x90 [nfsv4]
> >>>> [  127.032001]  _nfs42_proc_listxattrs+0x143/0x200 [nfsv4]
> >>>> [  127.034766]  nfs42_proc_listxattrs+0x8e/0xc0 [nfsv4]
> >>>> [  127.037160]  nfs4_listxattr+0x1b8/0x210 [nfsv4]
> >>>> [  127.039454]  ? __check_object_size+0x162/0x180
> >>>> [  127.041606]  listxattr+0xd1/0xf0
> >>>> [  127.043163]  path_listxattr+0x5f/0xb0
> >>>> [  127.044969]  do_syscall_64+0x33/0x40
> >>>> [  127.047200]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> >>>> [  127.049644] RIP: 0033:0x7fab74296c8b
> >>>> [  127.051440] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
> >>>> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
> >>>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
> >>>> 01 48
> >>>> [  127.060978] RSP: 002b:00007fffcddc4a38 EFLAGS: 00000202 ORIG_RAX:
> >>>> 00000000000000c2
> >>>> [  127.064848] RAX: ffffffffffffffda RBX: 000000000000002a RCX: 00007fab74296c8b
> >>>> [  127.068244] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000674440
> >>>> [  127.071642] RBP: 00000000000001f4 R08: 0000000000000000 R09: 00007fffcddc4687
> >>>> [  127.075214] R10: 0000000000000004 R11: 0000000000000202 R12: 000000000000002a
> >>>> [  127.078667] R13: 0000000000403e60 R14: 0000000000000000 R15: 0000000000000000
> >>>> [  127.082783] Modules linked in: cts rpcsec_gss_krb5 nfsv4
> >>>> dns_resolver nfs lockd grace nfs_ssc rpcrdma rdma_rxe ip6_udp_tunnel
> >>>> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core nls_utf8
> >>>> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> >>>> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> >>>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
> >>>> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
> >>>> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
> >>>> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
> >>>> vmw_balloon ghash_clmulni_intel joydev btusb btrtl pcspkr btbcm
> >>>> btintel uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
> >>>> videobuf2_common videodev snd_ens1371 bluetooth snd_ac97_codec
> >>>> ac97_bus rfkill mc snd_seq snd_pcm ecdh_generic ecc snd_timer
> >>>> snd_rawmidi snd_seq_device snd soundcore vmw_vmci i2c_piix4
> >>>> auth_rpcgss sunrpc ip_tables xfs libcrc32c sr_mod cdrom sg ata_generic
> >>>> vmwgfx drm_kms_helper nvme crc32c_intel serio_raw
> >>>> [  127.082841]  syscopyarea sysfillrect sysimgblt fb_sys_fops
> >>>> nvme_core t10_pi cec vmxnet3 ata_piix ahci libahci ttm libata drm
> >>>> [  127.132635] CR2: ffffa085363bb010
> >>>> [  127.134527] ---[ end trace 912ce02a00d98fdf ]---
> >>>
> >>> --
> >>> Chuck Lever
>
> --
> Chuck Lever
>
>
>


* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-10 16:51         ` Olga Kornievskaia
@ 2020-11-10 17:44           ` Chuck Lever
  2020-11-10 18:18             ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-10 17:44 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Anna Schumaker, Linux NFS Mailing List


> On Nov 10, 2020, at 11:51 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Tue, Nov 10, 2020 at 9:42 AM Chuck Lever <chuck.lever@oracle.com> wrote:
>> 
>>> With those changes applied, I get the following oops:
>> 
>> What's your workload? Do you have a reproducer?
> 
> I ran generic/013 linux-to-linux.

I'm not able to reproduce the problem.

xfstest: mount options: vers=4.2,proto=rdma,sec=sys,rsize=262144,wsize=131072
 
FSTYP         -- nfs
PLATFORM      -- Linux/x86_64 manet 5.10.0-rc1-00015-g6d4bab79ed4f #1297 SMP Sat Oct 31 12:56:30 EDT 2020

generic/001 22s ...  22s
generic/002 1s ...  2s
generic/003	[not run] this test requires a valid $SCRATCH_DEV
generic/004	[not run] O_TMPFILE is not supported
generic/005 1s ...  2s
generic/006 10s ...  9s
generic/007 40s ...  39s
generic/008	[not run] xfs_io fzero  failed (old kernel/wrong fs?)
generic/009	[not run] xfs_io fzero  failed (old kernel/wrong fs?)
generic/010	[not run] /home/cel/src/xfstests/src/dbtest not built
generic/011 6s ...  6s
generic/012	[not run] xfs_io fiemap  failed (old kernel/wrong fs?)
generic/013 9s ...  9s
generic/014 10s ...  8s
generic/015	[not run] this test requires a valid $SCRATCH_DEV
generic/016	[not run] xfs_io fiemap  failed (old kernel/wrong fs?)
generic/017	[not run] this test requires a valid $SCRATCH_DEV
generic/018	[not run] this test requires a valid $SCRATCH_DEV

I must be missing something that you have in your environment.


--
Chuck Lever





* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-10 17:44           ` Chuck Lever
@ 2020-11-10 18:18             ` Olga Kornievskaia
  2020-11-10 18:25               ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-10 18:18 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Anna Schumaker, Linux NFS Mailing List

On Tue, Nov 10, 2020 at 12:44 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
> > On Nov 10, 2020, at 11:51 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > On Tue, Nov 10, 2020 at 9:42 AM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>
> >>> With those changes applied, I get the following oops:
> >>
> >> What's your workload? Do you have a reproducer?
> >
> > I ran generic/013 linux-to-linux.
>
> I'm not able to reproduce the problem.

Are you on hardware? This is over soft RoCE/iWarp. I will try hardware,
but it'll take me time.

> xfstest: mount options: vers=4.2,proto=rdma,sec=sys,rsize=262144,wsize=131072
>
> FSTYP         -- nfs
> PLATFORM      -- Linux/x86_64 manet 5.10.0-rc1-00015-g6d4bab79ed4f #1297 SMP Sat Oct 31 12:56:30 EDT 2020
>
> generic/001 22s ...  22s
> generic/002 1s ...  2s
> generic/003     [not run] this test requires a valid $SCRATCH_DEV
> generic/004     [not run] O_TMPFILE is not supported
> generic/005 1s ...  2s
> generic/006 10s ...  9s
> generic/007 40s ...  39s
> generic/008     [not run] xfs_io fzero  failed (old kernel/wrong fs?)
> generic/009     [not run] xfs_io fzero  failed (old kernel/wrong fs?)
> generic/010     [not run] /home/cel/src/xfstests/src/dbtest not built
> generic/011 6s ...  6s
> generic/012     [not run] xfs_io fiemap  failed (old kernel/wrong fs?)
> generic/013 9s ...  9s
> generic/014 10s ...  8s
> generic/015     [not run] this test requires a valid $SCRATCH_DEV
> generic/016     [not run] xfs_io fiemap  failed (old kernel/wrong fs?)
> generic/017     [not run] this test requires a valid $SCRATCH_DEV
> generic/018     [not run] this test requires a valid $SCRATCH_DEV
>
> I must be missing something that you have in your environment.
>
>
> --
> Chuck Lever
>
>
>


* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-10 18:18             ` Olga Kornievskaia
@ 2020-11-10 18:25               ` Chuck Lever
  2020-11-11 21:42                 ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-10 18:25 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Anna Schumaker, Linux NFS Mailing List



> On Nov 10, 2020, at 1:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Tue, Nov 10, 2020 at 12:44 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>> 
>> 
>>> On Nov 10, 2020, at 11:51 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>> 
>>> On Tue, Nov 10, 2020 at 9:42 AM Chuck Lever <chuck.lever@oracle.com> wrote:
>>>> 
>>>>> With those changes applied, I get the following oops:
>>>> 
>>>> What's your workload? Do you have a reproducer?
>>> 
>>> I ran generic/013 linux-to-linux.
>> 
>> I'm not able to reproduce the problem.
> 
> Are you on hardware? This is over soft RoCE/iWarp. I will try hardware,
> but it'll take me time.

Since it appears to work correctly when a hardware RDMA device is in
use, that approach would be a waste of your time, methinks. Can you try
debugging with your soft RDMA device?

Start by identifying what NFS operation is failing, and what configuration
of chunks it is using.
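
One way to gather that (a sketch only; the event names are assumptions based
on the rpcrdma trace definitions, so list the events directory first to see
what your kernel actually exposes) is the kernel's tracepoints under tracefs:

```shell
# Sketch: enable rpcrdma/sunrpc tracepoints around the failing test to see
# which RPC, and which chunk layout, precede the crash. Event names are
# assumptions -- verify against the events/ directory on your kernel.
# Needs root and a mounted tracefs; degrades gracefully otherwise.
T=/sys/kernel/tracing
if [ -d "$T/events/rpcrdma" ] && [ -w "$T/trace" ]; then
    ls "$T/events/rpcrdma"                         # discover available events
    echo 1 > "$T/events/rpcrdma/enable"            # enable the whole group
    echo 1 > "$T/events/sunrpc/rpc_request/enable"
    : > "$T/trace"                                 # clear the ring buffer
    # ... run ./check generic/013 here, then inspect the capture:
    tail -n 100 "$T/trace"
else
    echo "rpcrdma tracepoints not available (need root and tracefs)"
fi
```

The last marshal event before the oops should show the operation and its
chunk configuration.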


>> xfstest: mount options: vers=4.2,proto=rdma,sec=sys,rsize=262144,wsize=131072
>> 
>> FSTYP         -- nfs
>> PLATFORM      -- Linux/x86_64 manet 5.10.0-rc1-00015-g6d4bab79ed4f #1297 SMP Sat Oct 31 12:56:30 EDT 2020
>> 
>> generic/001 22s ...  22s
>> generic/002 1s ...  2s
>> generic/003     [not run] this test requires a valid $SCRATCH_DEV
>> generic/004     [not run] O_TMPFILE is not supported
>> generic/005 1s ...  2s
>> generic/006 10s ...  9s
>> generic/007 40s ...  39s
>> generic/008     [not run] xfs_io fzero  failed (old kernel/wrong fs?)
>> generic/009     [not run] xfs_io fzero  failed (old kernel/wrong fs?)
>> generic/010     [not run] /home/cel/src/xfstests/src/dbtest not built
>> generic/011 6s ...  6s
>> generic/012     [not run] xfs_io fiemap  failed (old kernel/wrong fs?)
>> generic/013 9s ...  9s
>> generic/014 10s ...  8s
>> generic/015     [not run] this test requires a valid $SCRATCH_DEV
>> generic/016     [not run] xfs_io fiemap  failed (old kernel/wrong fs?)
>> generic/017     [not run] this test requires a valid $SCRATCH_DEV
>> generic/018     [not run] this test requires a valid $SCRATCH_DEV
>> 
>> I must be missing something that you have in your environment.
>> 
>> 
>> --
>> Chuck Lever

--
Chuck Lever





* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-10 18:25               ` Chuck Lever
@ 2020-11-11 21:42                 ` Olga Kornievskaia
  2020-11-12 15:27                   ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-11 21:42 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Anna Schumaker, Linux NFS Mailing List

On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>
> > On Nov 10, 2020, at 1:18 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > On Tue, Nov 10, 2020 at 12:44 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>
> >>
> >>> On Nov 10, 2020, at 11:51 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>
> >>> On Tue, Nov 10, 2020 at 9:42 AM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>
> >>>>> With those changes applied, I get the following oops:
> >>>>
> >>>> What's your workload? Do you have a reproducer?
> >>>
> >>> I ran generic/013 linux-to-linux.
> >>
> >> I'm not able to reproduce the problem.
> >
> > Are you on hardware? This is over soft roce/iwarp. I will try hardware
> > but it'll take me time.
>
> Since it appears to work correctly when a hardware RDMA device is in
> use, that approach would be a waste of your time, methinks. Can you try
> debugging with your soft RDMA device?

Turns out I currently can't test it on hardware, and it's unclear when I'd
be able to do so.

> Start by identifying what NFS operation is failing, and what configuration
> of chunks it is using.

This happens after decoding the LISTXATTRS reply; it's a send-only reply.
I can't tell whether the real problem is nfs4_xdr_dec_listxattrs()
overwriting memory that messes with rpcrdma_complete_rqst(), or whether
it's a problem in the RDMA layer itself.
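
As a purely conceptual illustration (ordinary Python with invented names,
not the kernel code), the class of bug being chased is an unbounded copy:
if the decoded reply claims more bytes than the receive pages the caller
provisioned, a straight memcpy walks off the end of the buffer, which
matches the wild write KASAN flags below. A bounded copy must clamp at
page capacity:

```python
# Conceptual sketch only -- invented names, not the kernel implementation.
# If the reply is longer than the provisioned receive pages, an unbounded
# copy overruns; clamping at page capacity avoids the wild write.
def bounded_fixup(reply: bytes, recv_pages: list) -> int:
    """Copy reply bytes into fixed-size pages, stopping at capacity."""
    copied = 0
    for page in recv_pages:
        chunk = reply[copied:copied + len(page)]
        if not chunk:
            break
        page[:len(chunk)] = chunk
        copied += len(chunk)
    return copied  # < len(reply) means the reply did not fit

pages = [bytearray(4096)]   # caller budgeted a single page for the reply
reply = bytes(6000)         # decoded reply claims 6000 bytes
n = bounded_fixup(reply, pages)
assert n == 4096            # copy stopped at capacity instead of overrunning
```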

Running it with KASAN shows the following:

[  538.505743] BUG: KASAN: wild-memory-access in
rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
[  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
[  538.517285]
[  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
[  538.521811] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
[  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
[  538.532722] Call Trace:
[  538.534366]  dump_stack+0x7c/0xa2
[  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
[  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
[  538.542817]  kasan_report.cold.9+0x6a/0x7c
[  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
[  538.551001]  check_memory_region+0x198/0x200
[  538.553763]  memcpy+0x38/0x60
[  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
[  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
[  538.562162]  ? ktime_get+0x4f/0xb0
[  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
[  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
[  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
[  538.573151]  process_one_work+0x387/0x680
[  538.575798]  worker_thread+0x57/0x5a0
[  538.577917]  ? process_one_work+0x680/0x680
[  538.581095]  kthread+0x1c8/0x1f0
[  538.583271]  ? kthread_parkme+0x40/0x40
[  538.585637]  ret_from_fork+0x22/0x30
[  538.587688] ==================================================================
[  538.591920] Disabling lock debugging due to kernel taint
[  538.595267] general protection fault, probably for non-canonical
address 0x5088000000000: 0000 [#1] SMP KASAN PTI
[  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
    5.10.0-rc3+ #33
[  538.609032] Hardware name: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
[  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
[  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
88 0c
[  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
[  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
[  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
[  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
[  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
[  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
[  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
knlGS:0000000000000000
[  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
[  538.675865] Call Trace:
[  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
[  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
[  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
[  538.687114]  call_decode+0x365/0x390 [sunrpc]
[  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
[  538.692410]  ? var_wake_function+0x80/0x80
[  538.694634]  ?
xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
[  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
[  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
[  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
[  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
[  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
[  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
[  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
[  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
[  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
[  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
[  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
[  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
[  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
[  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
[  538.737173]  ? __kernel_text_address+0xe/0x30
[  538.739400]  ? unwind_get_return_address+0x2f/0x50
[  538.741727]  ? create_prof_cpu_mask+0x20/0x20
[  538.744141]  ? stack_trace_consume_entry+0x80/0x80
[  538.746585]  ? _raw_spin_lock+0x7a/0xd0
[  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
[  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
[  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
[  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
[  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
[  538.760859]  ? __ia32_sys_rename+0x40/0x40
[  538.762897]  ? selinux_quota_on+0xf0/0xf0
[  538.764827]  ? __check_object_size+0x178/0x220
[  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
[  538.769233]  ? security_inode_listxattr+0x53/0x60
[  538.771962]  listxattr+0x5b/0xf0
[  538.773980]  path_listxattr+0xa1/0x100
[  538.776147]  ? listxattr+0xf0/0xf0
[  538.778064]  do_syscall_64+0x33/0x40
[  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  538.783440] RIP: 0033:0x7f12bdaebc8b
[  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
01 48
[  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
00000000000000c2
[  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
[  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
[  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
[  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
[  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
[  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
serio_raw nvme vmwgfx drm_kms_helper
[  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
[  538.869541] ---[ end trace 4abb2d95a72e9aab ]---



> >> xfstest: mount options: vers=4.2,proto=rdma,sec=sys,rsize=262144,wsize=131072
> >>
> >> FSTYP         -- nfs
> >> PLATFORM      -- Linux/x86_64 manet 5.10.0-rc1-00015-g6d4bab79ed4f #1297 SMP Sat Oct 31 12:56:30 EDT 2020
> >>
> >> generic/001 22s ...  22s
> >> generic/002 1s ...  2s
> >> generic/003     [not run] this test requires a valid $SCRATCH_DEV
> >> generic/004     [not run] O_TMPFILE is not supported
> >> generic/005 1s ...  2s
> >> generic/006 10s ...  9s
> >> generic/007 40s ...  39s
> >> generic/008     [not run] xfs_io fzero  failed (old kernel/wrong fs?)
> >> generic/009     [not run] xfs_io fzero  failed (old kernel/wrong fs?)
> >> generic/010     [not run] /home/cel/src/xfstests/src/dbtest not built
> >> generic/011 6s ...  6s
> >> generic/012     [not run] xfs_io fiemap  failed (old kernel/wrong fs?)
> >> generic/013 9s ...  9s
> >> generic/014 10s ...  8s
> >> generic/015     [not run] this test requires a valid $SCRATCH_DEV
> >> generic/016     [not run] xfs_io fiemap  failed (old kernel/wrong fs?)
> >> generic/017     [not run] this test requires a valid $SCRATCH_DEV
> >> generic/018     [not run] this test requires a valid $SCRATCH_DEV
> >>
> >> I must be missing something that you have in your environment.
> >>
> >>
> >> --
> >> Chuck Lever
>
> --
> Chuck Lever
>
>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-11 21:42                 ` Olga Kornievskaia
@ 2020-11-12 15:27                   ` Chuck Lever
  2020-11-12 15:37                     ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-12 15:27 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Anna Schumaker, Linux NFS Mailing List



> On Nov 11, 2020, at 4:42 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> 
>> Start by identifying what NFS operation is failing, and what configuration
>> of chunks it is using.
> 
> This happens after decoding LIST_XATTRS reply. It's a send only reply.
> I can't tell if the real problem is in the nfs4_xdr_dec_listxattrs()
> and it's overwriting memory that messes with rpcrdma_complete_rqst()
> or it's the rdma problem.
> 
> Running it with Kasan shows the following:
> 
> [  538.505743] BUG: KASAN: wild-memory-access in
> rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> [  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
> [  538.517285]
> [  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
> [  538.521811] Hardware name: VMware, Inc. VMware Virtual
> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> [  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
> [  538.532722] Call Trace:
> [  538.534366]  dump_stack+0x7c/0xa2
> [  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> [  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> [  538.542817]  kasan_report.cold.9+0x6a/0x7c
> [  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> [  538.551001]  check_memory_region+0x198/0x200
> [  538.553763]  memcpy+0x38/0x60
> [  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> [  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
> [  538.562162]  ? ktime_get+0x4f/0xb0
> [  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
> [  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
> [  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
> [  538.573151]  process_one_work+0x387/0x680
> [  538.575798]  worker_thread+0x57/0x5a0
> [  538.577917]  ? process_one_work+0x680/0x680
> [  538.581095]  kthread+0x1c8/0x1f0
> [  538.583271]  ? kthread_parkme+0x40/0x40
> [  538.585637]  ret_from_fork+0x22/0x30
> [  538.587688] ==================================================================
> [  538.591920] Disabling lock debugging due to kernel taint
> [  538.595267] general protection fault, probably for non-canonical
> address 0x5088000000000: 0000 [#1] SMP KASAN PTI
> [  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
>    5.10.0-rc3+ #33
> [  538.609032] Hardware name: VMware, Inc. VMware Virtual
> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> [  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
> [  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
> fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
> 01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
> 88 0c
> [  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
> [  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
> [  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
> [  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
> [  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
> [  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
> [  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
> knlGS:0000000000000000
> [  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
> [  538.675865] Call Trace:
> [  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
> [  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
> [  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
> [  538.687114]  call_decode+0x365/0x390 [sunrpc]
> [  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> [  538.692410]  ? var_wake_function+0x80/0x80
> [  538.694634]  ?
> xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
> [  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> [  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> [  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
> [  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
> [  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
> [  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
> [  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
> [  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
> [  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
> [  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
> [  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
> [  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
> [  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
> [  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
> [  538.737173]  ? __kernel_text_address+0xe/0x30
> [  538.739400]  ? unwind_get_return_address+0x2f/0x50
> [  538.741727]  ? create_prof_cpu_mask+0x20/0x20
> [  538.744141]  ? stack_trace_consume_entry+0x80/0x80
> [  538.746585]  ? _raw_spin_lock+0x7a/0xd0
> [  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
> [  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
> [  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
> [  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
> [  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
> [  538.760859]  ? __ia32_sys_rename+0x40/0x40
> [  538.762897]  ? selinux_quota_on+0xf0/0xf0
> [  538.764827]  ? __check_object_size+0x178/0x220
> [  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
> [  538.769233]  ? security_inode_listxattr+0x53/0x60
> [  538.771962]  listxattr+0x5b/0xf0
> [  538.773980]  path_listxattr+0xa1/0x100
> [  538.776147]  ? listxattr+0xf0/0xf0
> [  538.778064]  do_syscall_64+0x33/0x40
> [  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [  538.783440] RIP: 0033:0x7f12bdaebc8b
> [  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
> 01 48
> [  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
> 00000000000000c2
> [  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
> [  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
> [  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
> [  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
> [  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
> [  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
> rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
> vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
> btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
> videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
> videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
> snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
> ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
> serio_raw nvme vmwgfx drm_kms_helper
> [  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
> nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
> [  538.869541] ---[ end trace 4abb2d95a72e9aab ]---

I'm running a v5.10-rc3 client now with the fix applied and
KASAN enabled. I traced my xfstests run and I'm definitely
exercising the LISTXATTRS path:

# trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
462

No crash or KASAN splat. I'm still missing something.

I sometimes apply small patches from the mailing list by
hand-editing the modified file. Are you sure you applied my fix
correctly?
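
For anyone reproducing the count above, it came from a tracing run
along these lines (a sketch only; the xfstests invocation and paths
are assumptions, and recording requires root and a mounted tracefs):

```shell
# Record sunrpc:rpc_request tracepoints while the test runs
# (the ./check invocation from an xfstests checkout is an assumption):
trace-cmd record -e sunrpc:rpc_request ./check generic/013

# Then count how many LISTXATTRS RPCs the client issued:
trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
```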


--
Chuck Lever




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-12 15:27                   ` Chuck Lever
@ 2020-11-12 15:37                     ` Olga Kornievskaia
  2020-11-12 20:48                       ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-12 15:37 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Anna Schumaker, Linux NFS Mailing List

On Thu, Nov 12, 2020 at 10:28 AM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>
> > On Nov 11, 2020, at 4:42 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >
> >> Start by identifying what NFS operation is failing, and what configuration
> >> of chunks it is using.
> >
> > This happens after decoding LIST_XATTRS reply. It's a send only reply.
> > I can't tell if the real problem is in the nfs4_xdr_dec_listxattrs()
> > and it's overwriting memory that messes with rpcrdma_complete_rqst()
> > or it's the rdma problem.
> >
> > Running it with Kasan shows the following:
> >
> > [  538.505743] BUG: KASAN: wild-memory-access in
> > rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> > [  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
> > [  538.517285]
> > [  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
> > [  538.521811] Hardware name: VMware, Inc. VMware Virtual
> > Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> > [  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
> > [  538.532722] Call Trace:
> > [  538.534366]  dump_stack+0x7c/0xa2
> > [  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> > [  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> > [  538.542817]  kasan_report.cold.9+0x6a/0x7c
> > [  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> > [  538.551001]  check_memory_region+0x198/0x200
> > [  538.553763]  memcpy+0x38/0x60
> > [  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> > [  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
> > [  538.562162]  ? ktime_get+0x4f/0xb0
> > [  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
> > [  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
> > [  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
> > [  538.573151]  process_one_work+0x387/0x680
> > [  538.575798]  worker_thread+0x57/0x5a0
> > [  538.577917]  ? process_one_work+0x680/0x680
> > [  538.581095]  kthread+0x1c8/0x1f0
> > [  538.583271]  ? kthread_parkme+0x40/0x40
> > [  538.585637]  ret_from_fork+0x22/0x30
> > [  538.587688] ==================================================================
> > [  538.591920] Disabling lock debugging due to kernel taint
> > [  538.595267] general protection fault, probably for non-canonical
> > address 0x5088000000000: 0000 [#1] SMP KASAN PTI
> > [  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
> >    5.10.0-rc3+ #33
> > [  538.609032] Hardware name: VMware, Inc. VMware Virtual
> > Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> > [  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
> > [  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
> > fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
> > 01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
> > 88 0c
> > [  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
> > [  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
> > [  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
> > [  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
> > [  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
> > [  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
> > [  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
> > knlGS:0000000000000000
> > [  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
> > [  538.675865] Call Trace:
> > [  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
> > [  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
> > [  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
> > [  538.687114]  call_decode+0x365/0x390 [sunrpc]
> > [  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> > [  538.692410]  ? var_wake_function+0x80/0x80
> > [  538.694634]  ?
> > xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
> > [  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> > [  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> > [  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
> > [  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
> > [  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
> > [  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
> > [  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
> > [  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
> > [  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
> > [  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
> > [  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
> > [  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
> > [  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
> > [  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
> > [  538.737173]  ? __kernel_text_address+0xe/0x30
> > [  538.739400]  ? unwind_get_return_address+0x2f/0x50
> > [  538.741727]  ? create_prof_cpu_mask+0x20/0x20
> > [  538.744141]  ? stack_trace_consume_entry+0x80/0x80
> > [  538.746585]  ? _raw_spin_lock+0x7a/0xd0
> > [  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
> > [  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
> > [  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
> > [  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
> > [  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
> > [  538.760859]  ? __ia32_sys_rename+0x40/0x40
> > [  538.762897]  ? selinux_quota_on+0xf0/0xf0
> > [  538.764827]  ? __check_object_size+0x178/0x220
> > [  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
> > [  538.769233]  ? security_inode_listxattr+0x53/0x60
> > [  538.771962]  listxattr+0x5b/0xf0
> > [  538.773980]  path_listxattr+0xa1/0x100
> > [  538.776147]  ? listxattr+0xf0/0xf0
> > [  538.778064]  do_syscall_64+0x33/0x40
> > [  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > [  538.783440] RIP: 0033:0x7f12bdaebc8b
> > [  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
> > 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
> > 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
> > 01 48
> > [  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
> > 00000000000000c2
> > [  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
> > [  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
> > [  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
> > [  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
> > [  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
> > [  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
> > udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
> > rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
> > isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> > nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> > nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
> > ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
> > vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
> > intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
> > vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
> > btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
> > videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
> > videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
> > snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
> > ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
> > serio_raw nvme vmwgfx drm_kms_helper
> > [  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
> > nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
> > [  538.869541] ---[ end trace 4abb2d95a72e9aab ]---
>
> I'm running a v5.10-rc3 client now with the fix applied and
> KASAN enabled. I traced my xfstests run and I'm definitely
> exercising the LISTXATTRS path:
>
> # trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
> 462
>
> No crash or KASAN splat. I'm still missing something.
>
> I sometimes apply small patches from the mailing list by hand
> editing the modified file. Are you sure you applied my fix
> correctly?

Again, I think the difference is soft RoCE vs. hardware (I still have
no way to test this on hardware). I ran a TCP mount with KASAN and
generic/013 doesn't trigger anything. Something about soft RoCE (or
iWarp) either reveals a problem in the NFS code, or the bug is in
soft RoCE itself; but KASAN is usually good at identifying the
problematic spot.
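
For reference, a soft RoCE setup like the one used here can be
recreated roughly as follows (a sketch only; the netdev name, server
address, and export path are assumptions, and these commands need
root):

```shell
# Create a soft RoCE (rxe) device on an existing Ethernet interface
# (eth0 is an assumption; substitute the real netdev name):
modprobe rdma_rxe
rdma link add rxe0 type rxe netdev eth0
rdma link show          # confirm rxe0 shows state ACTIVE

# Mount NFS over RDMA; 20049 is the standard NFS/RDMA port, and
# server:/export is a placeholder:
mount -t nfs -o vers=4.2,proto=rdma,port=20049 server:/export /mnt

# Then run the failing test against the RDMA mount, e.g.:
# ./check generic/013
```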

>
>
> --
> Chuck Lever
>
>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-12 15:37                     ` Olga Kornievskaia
@ 2020-11-12 20:48                       ` Chuck Lever
  2020-11-13 18:49                         ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-12 20:48 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Anna Schumaker, Linux NFS Mailing List



> On Nov 12, 2020, at 10:37 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Thu, Nov 12, 2020 at 10:28 AM Chuck Lever <chuck.lever@oracle.com> wrote:
>> 
>> 
>> 
>>> On Nov 11, 2020, at 4:42 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>> 
>>> On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>>> 
>>>> Start by identifying what NFS operation is failing, and what configuration
>>>> of chunks it is using.
>>> 
>>> This happens after decoding LIST_XATTRS reply. It's a send only reply.
>>> I can't tell if the real problem is in the nfs4_xdr_dec_listxattrs()
>>> and it's overwriting memory that messes with rpcrdma_complete_rqst()
>>> or it's the rdma problem.
>>> 
>>> Running it with Kasan shows the following:
>>> 
>>> [  538.505743] BUG: KASAN: wild-memory-access in
>>> rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>> [  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
>>> [  538.517285]
>>> [  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
>>> [  538.521811] Hardware name: VMware, Inc. VMware Virtual
>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
>>> [  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
>>> [  538.532722] Call Trace:
>>> [  538.534366]  dump_stack+0x7c/0xa2
>>> [  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>> [  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>> [  538.542817]  kasan_report.cold.9+0x6a/0x7c
>>> [  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>> [  538.551001]  check_memory_region+0x198/0x200
>>> [  538.553763]  memcpy+0x38/0x60
>>> [  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>> [  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
>>> [  538.562162]  ? ktime_get+0x4f/0xb0
>>> [  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
>>> [  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
>>> [  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
>>> [  538.573151]  process_one_work+0x387/0x680
>>> [  538.575798]  worker_thread+0x57/0x5a0
>>> [  538.577917]  ? process_one_work+0x680/0x680
>>> [  538.581095]  kthread+0x1c8/0x1f0
>>> [  538.583271]  ? kthread_parkme+0x40/0x40
>>> [  538.585637]  ret_from_fork+0x22/0x30
>>> [  538.587688] ==================================================================
>>> [  538.591920] Disabling lock debugging due to kernel taint
>>> [  538.595267] general protection fault, probably for non-canonical
>>> address 0x5088000000000: 0000 [#1] SMP KASAN PTI
>>> [  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
>>>   5.10.0-rc3+ #33
>>> [  538.609032] Hardware name: VMware, Inc. VMware Virtual
>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
>>> [  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
>>> [  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
>>> fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
>>> 01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
>>> 88 0c
>>> [  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
>>> [  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
>>> [  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
>>> [  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
>>> [  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
>>> [  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
>>> [  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
>>> knlGS:0000000000000000
>>> [  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> [  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
>>> [  538.675865] Call Trace:
>>> [  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
>>> [  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
>>> [  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
>>> [  538.687114]  call_decode+0x365/0x390 [sunrpc]
>>> [  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
>>> [  538.692410]  ? var_wake_function+0x80/0x80
>>> [  538.694634]  ?
>>> xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
>>> [  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
>>> [  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
>>> [  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
>>> [  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
>>> [  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
>>> [  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
>>> [  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
>>> [  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
>>> [  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
>>> [  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
>>> [  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
>>> [  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
>>> [  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
>>> [  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
>>> [  538.737173]  ? __kernel_text_address+0xe/0x30
>>> [  538.739400]  ? unwind_get_return_address+0x2f/0x50
>>> [  538.741727]  ? create_prof_cpu_mask+0x20/0x20
>>> [  538.744141]  ? stack_trace_consume_entry+0x80/0x80
>>> [  538.746585]  ? _raw_spin_lock+0x7a/0xd0
>>> [  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
>>> [  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
>>> [  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
>>> [  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
>>> [  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
>>> [  538.760859]  ? __ia32_sys_rename+0x40/0x40
>>> [  538.762897]  ? selinux_quota_on+0xf0/0xf0
>>> [  538.764827]  ? __check_object_size+0x178/0x220
>>> [  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
>>> [  538.769233]  ? security_inode_listxattr+0x53/0x60
>>> [  538.771962]  listxattr+0x5b/0xf0
>>> [  538.773980]  path_listxattr+0xa1/0x100
>>> [  538.776147]  ? listxattr+0xf0/0xf0
>>> [  538.778064]  do_syscall_64+0x33/0x40
>>> [  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>> [  538.783440] RIP: 0033:0x7f12bdaebc8b
>>> [  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
>>> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
>>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
>>> 01 48
>>> [  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
>>> 00000000000000c2
>>> [  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
>>> [  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
>>> [  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
>>> [  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
>>> [  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
>>> [  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
>>> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
>>> rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
>>> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
>>> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
>>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
>>> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
>>> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
>>> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
>>> vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
>>> btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
>>> videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
>>> videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
>>> snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
>>> ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
>>> serio_raw nvme vmwgfx drm_kms_helper
>>> [  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
>>> nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
>>> [  538.869541] ---[ end trace 4abb2d95a72e9aab ]---
>> 
>> I'm running a v5.10-rc3 client now with the fix applied and
>> KASAN enabled. I traced my xfstests run and I'm definitely
>> exercising the LISTXATTRS path:
>> 
>> # trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
>> 462
>> 
>> No crash or KASAN splat. I'm still missing something.
>> 
>> I sometimes apply small patches from the mailing list by hand
>> editing the modified file. Are you sure you applied my fix
>> correctly?
> 
> Again I think the difference is the SoftRoce vs hardware

I'm not finding that plausible... Can you troubleshoot this further
to demonstrate how soft RoCE is contributing to this issue?


--
Chuck Lever




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-12 20:48                       ` Chuck Lever
@ 2020-11-13 18:49                         ` Olga Kornievskaia
  2020-11-13 19:03                           ` Chuck Lever
  0 siblings, 1 reply; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-13 18:49 UTC (permalink / raw)
  To: Chuck Lever, Frank van der Linden; +Cc: Anna Schumaker, Linux NFS Mailing List

On Thu, Nov 12, 2020 at 3:49 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>
> > On Nov 12, 2020, at 10:37 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > On Thu, Nov 12, 2020 at 10:28 AM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>
> >>
> >>
> >>> On Nov 11, 2020, at 4:42 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>
> >>> On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>
> >>>> Start by identifying what NFS operation is failing, and what configuration
> >>>> of chunks it is using.
> >>>
> >>> This happens after decoding LIST_XATTRS reply. It's a send only reply.
> >>> I can't tell if the real problem is in the nfs4_xdr_dec_listxattrs()
> >>> and it's overwriting memory that messes with rpcrdma_complete_rqst()
> >>> or it's the rdma problem.
> >>>
> >>> Running it with Kasan shows the following:
> >>>
> >>> [  538.505743] BUG: KASAN: wild-memory-access in
> >>> rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>> [  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
> >>> [  538.517285]
> >>> [  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
> >>> [  538.521811] Hardware name: VMware, Inc. VMware Virtual
> >>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> >>> [  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
> >>> [  538.532722] Call Trace:
> >>> [  538.534366]  dump_stack+0x7c/0xa2
> >>> [  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>> [  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>> [  538.542817]  kasan_report.cold.9+0x6a/0x7c
> >>> [  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>> [  538.551001]  check_memory_region+0x198/0x200
> >>> [  538.553763]  memcpy+0x38/0x60
> >>> [  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>> [  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
> >>> [  538.562162]  ? ktime_get+0x4f/0xb0
> >>> [  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
> >>> [  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
> >>> [  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
> >>> [  538.573151]  process_one_work+0x387/0x680
> >>> [  538.575798]  worker_thread+0x57/0x5a0
> >>> [  538.577917]  ? process_one_work+0x680/0x680
> >>> [  538.581095]  kthread+0x1c8/0x1f0
> >>> [  538.583271]  ? kthread_parkme+0x40/0x40
> >>> [  538.585637]  ret_from_fork+0x22/0x30
> >>> [  538.587688] ==================================================================
> >>> [  538.591920] Disabling lock debugging due to kernel taint
> >>> [  538.595267] general protection fault, probably for non-canonical
> >>> address 0x5088000000000: 0000 [#1] SMP KASAN PTI
> >>> [  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
> >>>   5.10.0-rc3+ #33
> >>> [  538.609032] Hardware name: VMware, Inc. VMware Virtual
> >>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> >>> [  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
> >>> [  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
> >>> fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
> >>> 01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
> >>> 88 0c
> >>> [  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
> >>> [  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
> >>> [  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
> >>> [  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
> >>> [  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
> >>> [  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
> >>> [  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
> >>> knlGS:0000000000000000
> >>> [  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >>> [  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
> >>> [  538.675865] Call Trace:
> >>> [  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
> >>> [  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
> >>> [  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
> >>> [  538.687114]  call_decode+0x365/0x390 [sunrpc]
> >>> [  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> >>> [  538.692410]  ? var_wake_function+0x80/0x80
> >>> [  538.694634]  ?
> >>> xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
> >>> [  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> >>> [  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> >>> [  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
> >>> [  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
> >>> [  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
> >>> [  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
> >>> [  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
> >>> [  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
> >>> [  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
> >>> [  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
> >>> [  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
> >>> [  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
> >>> [  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
> >>> [  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
> >>> [  538.737173]  ? __kernel_text_address+0xe/0x30
> >>> [  538.739400]  ? unwind_get_return_address+0x2f/0x50
> >>> [  538.741727]  ? create_prof_cpu_mask+0x20/0x20
> >>> [  538.744141]  ? stack_trace_consume_entry+0x80/0x80
> >>> [  538.746585]  ? _raw_spin_lock+0x7a/0xd0
> >>> [  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
> >>> [  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
> >>> [  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
> >>> [  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
> >>> [  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
> >>> [  538.760859]  ? __ia32_sys_rename+0x40/0x40
> >>> [  538.762897]  ? selinux_quota_on+0xf0/0xf0
> >>> [  538.764827]  ? __check_object_size+0x178/0x220
> >>> [  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
> >>> [  538.769233]  ? security_inode_listxattr+0x53/0x60
> >>> [  538.771962]  listxattr+0x5b/0xf0
> >>> [  538.773980]  path_listxattr+0xa1/0x100
> >>> [  538.776147]  ? listxattr+0xf0/0xf0
> >>> [  538.778064]  do_syscall_64+0x33/0x40
> >>> [  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> >>> [  538.783440] RIP: 0033:0x7f12bdaebc8b
> >>> [  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
> >>> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
> >>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
> >>> 01 48
> >>> [  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
> >>> 00000000000000c2
> >>> [  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
> >>> [  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
> >>> [  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
> >>> [  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
> >>> [  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
> >>> [  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
> >>> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
> >>> rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
> >>> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> >>> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> >>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
> >>> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
> >>> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
> >>> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
> >>> vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
> >>> btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
> >>> videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
> >>> videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
> >>> snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
> >>> ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
> >>> serio_raw nvme vmwgfx drm_kms_helper
> >>> [  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
> >>> nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
> >>> [  538.869541] ---[ end trace 4abb2d95a72e9aab ]---
> >>
> >> I'm running a v5.10-rc3 client now with the fix applied and
> >> KASAN enabled. I traced my xfstests run and I'm definitely
> >> exercising the LISTXATTRS path:
> >>
> >> # trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
> >> 462
> >>
> >> No crash or KASAN splat. I'm still missing something.
> >>
> >> I sometimes apply small patches from the mailing list by hand
> >> editing the modified file. Are you sure you applied my fix
> >> correctly?
> >
> > Again I think the difference is the SoftRoce vs hardware
>
> I'm not finding that plausible... Can you troubleshoot this further
> to demonstrate how soft RoCE is contributing to this issue?

Btw I'm not the only one. Anna's tests are failing too.

OK so all this started with the xattr patches (Trond's tags/nfs-for-5.9-1
doesn't pass generic/013). Your patch isn't sufficient to fix it.
When I increased decode_listxattrs_maxsz by 5, the oops went away (but
that's a hack). Here's a fix on top of your fix that allows my tests
to run.

diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
index d0ddf90c9be4..49573b285946 100644
--- a/fs/nfs/nfs42xdr.c
+++ b/fs/nfs/nfs42xdr.c
@@ -179,7 +179,7 @@
                                 1 + nfs4_xattr_name_maxsz + 1)
 #define decode_setxattr_maxsz   (op_decode_hdr_maxsz + \
                                  decode_change_info_maxsz)
 #define encode_listxattrs_maxsz  (op_encode_hdr_maxsz + 2 + 1)
-#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 1)
+#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 5)
 #define encode_removexattr_maxsz (op_encode_hdr_maxsz + 1 + \
                                  nfs4_xattr_name_maxsz)
 #define decode_removexattr_maxsz (op_decode_hdr_maxsz + \

I would like Frank to comment on what decode_listxattrs_maxsz
should be. According to the RFC:

   /// struct LISTXATTRS4resok {
   ///         nfs_cookie4    lxr_cookie;
   ///         xattrkey4      lxr_names<>;
   ///         bool           lxr_eof;
   /// };

This is the current code (with Chuck's fix): op_decode_hdr_maxsz + 2 + 1 + 1 + 1

I don't see how the xattrkey4 lxr_names<> array can be estimated by a
constant. So I think the real fix should be:

diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
index d0ddf90c9be4..73b44f8c036d 100644
--- a/fs/nfs/nfs42xdr.c
+++ b/fs/nfs/nfs42xdr.c
@@ -179,7 +179,8 @@
                                 1 + nfs4_xattr_name_maxsz + 1)
 #define decode_setxattr_maxsz   (op_decode_hdr_maxsz + \
                                  decode_change_info_maxsz)
 #define encode_listxattrs_maxsz  (op_encode_hdr_maxsz + 2 + 1)
-#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 1)
+#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + \
+                                 XDR_QUADLEN(NFS4_OPAQUE_LIMIT))
 #define encode_removexattr_maxsz (op_encode_hdr_maxsz + 1 + \
                                  nfs4_xattr_name_maxsz)
 #define decode_removexattr_maxsz (op_decode_hdr_maxsz + \



>

>
> --
> Chuck Lever
>
>
>

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-13 18:49                         ` Olga Kornievskaia
@ 2020-11-13 19:03                           ` Chuck Lever
  2020-11-13 19:12                             ` Olga Kornievskaia
  0 siblings, 1 reply; 16+ messages in thread
From: Chuck Lever @ 2020-11-13 19:03 UTC (permalink / raw)
  To: Olga Kornievskaia
  Cc: Frank van der Linden, Anna Schumaker, Linux NFS Mailing List



> On Nov 13, 2020, at 1:49 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> 
> On Thu, Nov 12, 2020 at 3:49 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>> 
>> 
>> 
>>> On Nov 12, 2020, at 10:37 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>> 
>>> On Thu, Nov 12, 2020 at 10:28 AM Chuck Lever <chuck.lever@oracle.com> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On Nov 11, 2020, at 4:42 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
>>>>> 
>>>>> On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>>>>> 
>>>>>> Start by identifying what NFS operation is failing, and what configuration
>>>>>> of chunks it is using.
>>>>> 
>>>>> This happens after decoding LIST_XATTRS reply. It's a send only reply.
>>>>> I can't tell if the real problem is in the nfs4_xdr_dec_listxattrs()
>>>>> and it's overwritting memory that messes with rpcrdma_complete_rqst()
>>>>> or it's the rdma problem.
>>>>> 
>>>>> Running it with Kasan shows the following:
>>>>> 
>>>>> [  538.505743] BUG: KASAN: wild-memory-access in
>>>>> rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>>>> [  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
>>>>> [  538.517285]
>>>>> [  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
>>>>> [  538.521811] Hardware name: VMware, Inc. VMware Virtual
>>>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
>>>>> [  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
>>>>> [  538.532722] Call Trace:
>>>>> [  538.534366]  dump_stack+0x7c/0xa2
>>>>> [  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>>>> [  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>>>> [  538.542817]  kasan_report.cold.9+0x6a/0x7c
>>>>> [  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>>>> [  538.551001]  check_memory_region+0x198/0x200
>>>>> [  538.553763]  memcpy+0x38/0x60
>>>>> [  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
>>>>> [  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
>>>>> [  538.562162]  ? ktime_get+0x4f/0xb0
>>>>> [  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
>>>>> [  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
>>>>> [  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
>>>>> [  538.573151]  process_one_work+0x387/0x680
>>>>> [  538.575798]  worker_thread+0x57/0x5a0
>>>>> [  538.577917]  ? process_one_work+0x680/0x680
>>>>> [  538.581095]  kthread+0x1c8/0x1f0
>>>>> [  538.583271]  ? kthread_parkme+0x40/0x40
>>>>> [  538.585637]  ret_from_fork+0x22/0x30
>>>>> [  538.587688] ==================================================================
>>>>> [  538.591920] Disabling lock debugging due to kernel taint
>>>>> [  538.595267] general protection fault, probably for non-canonical
>>>>> address 0x5088000000000: 0000 [#1] SMP KASAN PTI
>>>>> [  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
>>>>>  5.10.0-rc3+ #33
>>>>> [  538.609032] Hardware name: VMware, Inc. VMware Virtual
>>>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
>>>>> [  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
>>>>> [  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
>>>>> fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
>>>>> 01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
>>>>> 88 0c
>>>>> [  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
>>>>> [  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
>>>>> [  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
>>>>> [  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
>>>>> [  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
>>>>> [  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
>>>>> [  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
>>>>> knlGS:0000000000000000
>>>>> [  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>> [  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
>>>>> [  538.675865] Call Trace:
>>>>> [  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
>>>>> [  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
>>>>> [  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
>>>>> [  538.687114]  call_decode+0x365/0x390 [sunrpc]
>>>>> [  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
>>>>> [  538.692410]  ? var_wake_function+0x80/0x80
>>>>> [  538.694634]  ?
>>>>> xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
>>>>> [  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
>>>>> [  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
>>>>> [  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
>>>>> [  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
>>>>> [  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
>>>>> [  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
>>>>> [  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
>>>>> [  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
>>>>> [  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
>>>>> [  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
>>>>> [  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
>>>>> [  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
>>>>> [  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
>>>>> [  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
>>>>> [  538.737173]  ? __kernel_text_address+0xe/0x30
>>>>> [  538.739400]  ? unwind_get_return_address+0x2f/0x50
>>>>> [  538.741727]  ? create_prof_cpu_mask+0x20/0x20
>>>>> [  538.744141]  ? stack_trace_consume_entry+0x80/0x80
>>>>> [  538.746585]  ? _raw_spin_lock+0x7a/0xd0
>>>>> [  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
>>>>> [  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
>>>>> [  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
>>>>> [  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
>>>>> [  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
>>>>> [  538.760859]  ? __ia32_sys_rename+0x40/0x40
>>>>> [  538.762897]  ? selinux_quota_on+0xf0/0xf0
>>>>> [  538.764827]  ? __check_object_size+0x178/0x220
>>>>> [  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
>>>>> [  538.769233]  ? security_inode_listxattr+0x53/0x60
>>>>> [  538.771962]  listxattr+0x5b/0xf0
>>>>> [  538.773980]  path_listxattr+0xa1/0x100
>>>>> [  538.776147]  ? listxattr+0xf0/0xf0
>>>>> [  538.778064]  do_syscall_64+0x33/0x40
>>>>> [  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>>>> [  538.783440] RIP: 0033:0x7f12bdaebc8b
>>>>> [  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
>>>>> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
>>>>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
>>>>> 01 48
>>>>> [  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
>>>>> 00000000000000c2
>>>>> [  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
>>>>> [  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
>>>>> [  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
>>>>> [  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
>>>>> [  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
>>>>> [  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
>>>>> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
>>>>> rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
>>>>> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
>>>>> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
>>>>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
>>>>> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
>>>>> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
>>>>> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
>>>>> vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
>>>>> btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
>>>>> videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
>>>>> videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
>>>>> snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
>>>>> ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
>>>>> serio_raw nvme vmwgfx drm_kms_helper
>>>>> [  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
>>>>> nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
>>>>> [  538.869541] ---[ end trace 4abb2d95a72e9aab ]---
>>>> 
>>>> I'm running a v5.10-rc3 client now with the fix applied and
>>>> KASAN enabled. I traced my xfstests run and I'm definitely
>>>> exercising the LISTXATTRS path:
>>>> 
>>>> # trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
>>>> 462
>>>> 
>>>> No crash or KASAN splat. I'm still missing something.
>>>> 
>>>> I sometimes apply small patches from the mailing list by hand
>>>> editing the modified file. Are you sure you applied my fix
>>>> correctly?
>>> 
>>> Again I think the difference is the SoftRoce vs hardware
>> 
>> I'm not finding that plausible... Can you troubleshoot this further
>> to demonstrate how soft RoCE is contributing to this issue?
> 
> Btw I'm not the only one. Anna's tests are failing too.
> 
> OK so all this started with xattr patches (Trond's tags/nfs-for-5.9-1
> don't pass the generic/013). Your patch isn't sufficient to fix it.

That might be true, but I'm not sure how you can reach that
conclusion with certainty, because you don't yet have a root
cause.


> When I increased decode_listxattrs_maxsz by 5 then Oops went away (but
> that's a hack). Here's a fix on top of your fix that allows for my
> tests to run.
> 
> diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
> index d0ddf90c9be4..49573b285946 100644
> --- a/fs/nfs/nfs42xdr.c
> +++ b/fs/nfs/nfs42xdr.c
> @@ -179,7 +179,7 @@
>                                 1 + nfs4_xattr_name_maxsz + 1)
> #define decode_setxattr_maxsz   (op_decode_hdr_maxsz +
> decode_change_info_maxsz)
> #define encode_listxattrs_maxsz  (op_encode_hdr_maxsz + 2 + 1)
> -#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 1)
> +#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 5)
> #define encode_removexattr_maxsz (op_encode_hdr_maxsz + 1 + \
>                                  nfs4_xattr_name_maxsz)
> #define decode_removexattr_maxsz (op_decode_hdr_maxsz + \
> 
> I would like Frank to comment on what the decode_listattrs_maxsz
> should be. According to the RFC:
> 
>   /// struct LISTXATTRS4resok {
>   ///         nfs_cookie4    lxr_cookie;
>   ///         xattrkey4      lxr_names<>;
>   ///         bool           lxr_eof;
>   /// };
> 
> This is current code (with Chuck's fix): op_decode_hdr_maxsz + 2 + 1 + 1 + 1
> 
> I don't see how the xattrkey4  lxr_names<> can be estimated by a
> constant number.

That's right, the size of xattrkey4 is not provided by the maxsz
constant.

nfs4_xdr_enc_listxattrs() invokes rpc_prepare_reply_pages() to set
up the pages needed for the variable part of the reply.


> So I think the real fix should be:
> 
> diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
> index d0ddf90c9be4..73b44f8c036d 100644
> --- a/fs/nfs/nfs42xdr.c
> +++ b/fs/nfs/nfs42xdr.c
> @@ -179,7 +179,8 @@
>                                 1 + nfs4_xattr_name_maxsz + 1)
> #define decode_setxattr_maxsz   (op_decode_hdr_maxsz +
> decode_change_info_maxsz)
> #define encode_listxattrs_maxsz  (op_encode_hdr_maxsz + 2 + 1)
> -#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 1)
> +#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + \
> +                                 XDR_QUADLEN(NFS4_OPAQUE_LIMIT))
> #define encode_removexattr_maxsz (op_encode_hdr_maxsz + 1 + \
>                                  nfs4_xattr_name_maxsz)
> #define decode_removexattr_maxsz (op_decode_hdr_maxsz + \
> 
> 
> 
> 
> 
> 
> 
> 
>> 
> 
>> 
>> --
>> Chuck Lever

--
Chuck Lever

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp)
  2020-11-13 19:03                           ` Chuck Lever
@ 2020-11-13 19:12                             ` Olga Kornievskaia
  0 siblings, 0 replies; 16+ messages in thread
From: Olga Kornievskaia @ 2020-11-13 19:12 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Frank van der Linden, Anna Schumaker, Linux NFS Mailing List

On Fri, Nov 13, 2020 at 2:05 PM Chuck Lever <chuck.lever@oracle.com> wrote:
>
>
>
> > On Nov 13, 2020, at 1:49 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >
> > On Thu, Nov 12, 2020 at 3:49 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>
> >>
> >>
> >>> On Nov 12, 2020, at 10:37 AM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>
> >>> On Thu, Nov 12, 2020 at 10:28 AM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>
> >>>>
> >>>>
> >>>>> On Nov 11, 2020, at 4:42 PM, Olga Kornievskaia <aglo@umich.edu> wrote:
> >>>>>
> >>>>> On Tue, Nov 10, 2020 at 1:25 PM Chuck Lever <chuck.lever@oracle.com> wrote:
> >>>>>
> >>>>>> Start by identifying what NFS operation is failing, and what configuration
> >>>>>> of chunks it is using.
> >>>>>
> >>>>> This happens after decoding LIST_XATTRS reply. It's a send only reply.
> >>>>> I can't tell if the real problem is in the nfs4_xdr_dec_listxattrs()
> >>>>> and it's overwritting memory that messes with rpcrdma_complete_rqst()
> >>>>> or it's the rdma problem.
> >>>>>
> >>>>> Running it with Kasan shows the following:
> >>>>>
> >>>>> [  538.505743] BUG: KASAN: wild-memory-access in
> >>>>> rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>>>> [  538.512019] Write of size 8 at addr 0005088000000000 by task kworker/1:1H/493
> >>>>> [  538.517285]
> >>>>> [  538.518219] CPU: 1 PID: 493 Comm: kworker/1:1H Not tainted 5.10.0-rc3+ #33
> >>>>> [  538.521811] Hardware name: VMware, Inc. VMware Virtual
> >>>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> >>>>> [  538.529220] Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
> >>>>> [  538.532722] Call Trace:
> >>>>> [  538.534366]  dump_stack+0x7c/0xa2
> >>>>> [  538.536473]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>>>> [  538.539514]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>>>> [  538.542817]  kasan_report.cold.9+0x6a/0x7c
> >>>>> [  538.545952]  ? rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>>>> [  538.551001]  check_memory_region+0x198/0x200
> >>>>> [  538.553763]  memcpy+0x38/0x60
> >>>>> [  538.555612]  rpcrdma_complete_rqst+0x41b/0x680 [rpcrdma]
> >>>>> [  538.558974]  ? rpcrdma_reset_cwnd+0x70/0x70 [rpcrdma]
> >>>>> [  538.562162]  ? ktime_get+0x4f/0xb0
> >>>>> [  538.564072]  ? rpcrdma_reply_handler+0x4ca/0x640 [rpcrdma]
> >>>>> [  538.567066]  __ib_process_cq+0xa7/0x1f0 [ib_core]
> >>>>> [  538.569905]  ib_cq_poll_work+0x31/0xb0 [ib_core]
> >>>>> [  538.573151]  process_one_work+0x387/0x680
> >>>>> [  538.575798]  worker_thread+0x57/0x5a0
> >>>>> [  538.577917]  ? process_one_work+0x680/0x680
> >>>>> [  538.581095]  kthread+0x1c8/0x1f0
> >>>>> [  538.583271]  ? kthread_parkme+0x40/0x40
> >>>>> [  538.585637]  ret_from_fork+0x22/0x30
> >>>>> [  538.587688] ==================================================================
> >>>>> [  538.591920] Disabling lock debugging due to kernel taint
> >>>>> [  538.595267] general protection fault, probably for non-canonical
> >>>>> address 0x5088000000000: 0000 [#1] SMP KASAN PTI
> >>>>> [  538.601982] CPU: 1 PID: 3623 Comm: fsstress Tainted: G    B
> >>>>>  5.10.0-rc3+ #33
> >>>>> [  538.609032] Hardware name: VMware, Inc. VMware Virtual
> >>>>> Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
> >>>>> [  538.619678] RIP: 0010:memcpy_orig+0xf5/0x10f
> >>>>> [  538.623892] Code: 00 00 00 00 00 83 fa 04 72 1b 8b 0e 44 8b 44 16
> >>>>> fc 89 0f 44 89 44 17 fc c3 66 90 66 2e 0f 1f 84 00 00 00 00 00 83 ea
> >>>>> 01 72 19 <0f> b6 0e 74 12 4c 0f b6 46 01 4c 0f b6 0c 16 44 88 47 01 44
> >>>>> 88 0c
> >>>>> [  538.636726] RSP: 0018:ffff888018707628 EFLAGS: 00010202
> >>>>> [  538.641125] RAX: ffff888009098855 RBX: 0000000000000008 RCX: ffffffffc19eca2d
> >>>>> [  538.645793] RDX: 0000000000000001 RSI: 0005088000000000 RDI: ffff888009098855
> >>>>> [  538.650290] RBP: 0000000000000000 R08: ffffed100121310b R09: ffffed100121310b
> >>>>> [  538.654879] R10: ffff888009098856 R11: ffffed100121310a R12: ffff888018707788
> >>>>> [  538.658700] R13: ffff888009098858 R14: ffff888009098857 R15: 0000000000000002
> >>>>> [  538.662871] FS:  00007f12be1c8740(0000) GS:ffff88805ca40000(0000)
> >>>>> knlGS:0000000000000000
> >>>>> [  538.667424] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >>>>> [  538.670768] CR2: 00007f12be1c7000 CR3: 000000005c37a004 CR4: 00000000001706e0
> >>>>> [  538.675865] Call Trace:
> >>>>> [  538.677376]  nfs4_xdr_dec_listxattrs+0x31d/0x3c0 [nfsv4]
> >>>>> [  538.681151]  ? nfs4_xdr_dec_read_plus+0x360/0x360 [nfsv4]
> >>>>> [  538.684232]  ? xdr_inline_decode+0x1e/0x260 [sunrpc]
> >>>>> [  538.687114]  call_decode+0x365/0x390 [sunrpc]
> >>>>> [  538.689626]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> >>>>> [  538.692410]  ? var_wake_function+0x80/0x80
> >>>>> [  538.694634]  ?
> >>>>> xprt_request_retransmit_after_disconnect.isra.15+0x5e/0x80 [sunrpc]
> >>>>> [  538.698920]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> >>>>> [  538.701578]  ? rpc_decode_header+0x770/0x770 [sunrpc]
> >>>>> [  538.704252]  __rpc_execute+0x11c/0x6e0 [sunrpc]
> >>>>> [  538.706829]  ? trace_event_raw_event_xprt_cong_event+0x270/0x270 [sunrpc]
> >>>>> [  538.710654]  ? rpc_make_runnable+0x54/0xe0 [sunrpc]
> >>>>> [  538.713416]  rpc_run_task+0x29c/0x2c0 [sunrpc]
> >>>>> [  538.715806]  nfs4_call_sync_custom+0xc/0x40 [nfsv4]
> >>>>> [  538.718551]  nfs4_do_call_sync+0x114/0x160 [nfsv4]
> >>>>> [  538.721571]  ? nfs4_call_sync_custom+0x40/0x40 [nfsv4]
> >>>>> [  538.724333]  ? __alloc_pages_nodemask+0x200/0x410
> >>>>> [  538.726794]  ? kasan_unpoison_shadow+0x30/0x40
> >>>>> [  538.729147]  ? __kasan_kmalloc.constprop.8+0xc1/0xd0
> >>>>> [  538.731733]  _nfs42_proc_listxattrs+0x1f6/0x2f0 [nfsv4]
> >>>>> [  538.734504]  ? nfs42_offload_cancel_done+0x50/0x50 [nfsv4]
> >>>>> [  538.737173]  ? __kernel_text_address+0xe/0x30
> >>>>> [  538.739400]  ? unwind_get_return_address+0x2f/0x50
> >>>>> [  538.741727]  ? create_prof_cpu_mask+0x20/0x20
> >>>>> [  538.744141]  ? stack_trace_consume_entry+0x80/0x80
> >>>>> [  538.746585]  ? _raw_spin_lock+0x7a/0xd0
> >>>>> [  538.748477]  nfs42_proc_listxattrs+0xf4/0x150 [nfsv4]
> >>>>> [  538.750920]  ? nfs42_proc_setxattr+0x150/0x150 [nfsv4]
> >>>>> [  538.753557]  ? nfs4_xattr_cache_list+0x91/0x120 [nfsv4]
> >>>>> [  538.756313]  nfs4_listxattr+0x34d/0x3d0 [nfsv4]
> >>>>> [  538.758506]  ? _nfs4_proc_access+0x260/0x260 [nfsv4]
> >>>>> [  538.760859]  ? __ia32_sys_rename+0x40/0x40
> >>>>> [  538.762897]  ? selinux_quota_on+0xf0/0xf0
> >>>>> [  538.764827]  ? __check_object_size+0x178/0x220
> >>>>> [  538.767063]  ? kasan_unpoison_shadow+0x30/0x40
> >>>>> [  538.769233]  ? security_inode_listxattr+0x53/0x60
> >>>>> [  538.771962]  listxattr+0x5b/0xf0
> >>>>> [  538.773980]  path_listxattr+0xa1/0x100
> >>>>> [  538.776147]  ? listxattr+0xf0/0xf0
> >>>>> [  538.778064]  do_syscall_64+0x33/0x40
> >>>>> [  538.780714]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> >>>>> [  538.783440] RIP: 0033:0x7f12bdaebc8b
> >>>>> [  538.785341] Code: f0 ff ff 73 01 c3 48 8b 0d fa 21 2c 00 f7 d8 64
> >>>>> 89 01 48 83 c8 ff c3 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 c2 00 00
> >>>>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d cd 21 2c 00 f7 d8 64 89
> >>>>> 01 48
> >>>>> [  538.794845] RSP: 002b:00007ffc7947cff8 EFLAGS: 00000206 ORIG_RAX:
> >>>>> 00000000000000c2
> >>>>> [  538.798985] RAX: ffffffffffffffda RBX: 0000000001454650 RCX: 00007f12bdaebc8b
> >>>>> [  538.802840] RDX: 0000000000000018 RSI: 000000000145c150 RDI: 0000000001454650
> >>>>> [  538.807093] RBP: 000000000145c150 R08: 00007f12bddaeba0 R09: 00007ffc7947cc46
> >>>>> [  538.811487] R10: 0000000000000000 R11: 0000000000000206 R12: 00000000000002dd
> >>>>> [  538.816713] R13: 0000000000000018 R14: 0000000000000018 R15: 0000000000000000
> >>>>> [  538.820437] Modules linked in: rpcrdma rdma_rxe ip6_udp_tunnel
> >>>>> udp_tunnel rdma_ucm rdma_cm iw_cm ib_cm ib_uverbs ib_core cts
> >>>>> rpcsec_gss_krb5 nfsv4 dns_resolver nfs lockd grace nfs_ssc nls_utf8
> >>>>> isofs fuse rfcomm nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> >>>>> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> >>>>> nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun bridge stp llc
> >>>>> ip6_tables nft_compat ip_set nf_tables nfnetlink bnep
> >>>>> vmw_vsock_vmci_transport vsock snd_seq_midi snd_seq_midi_event
> >>>>> intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul
> >>>>> vmw_balloon ghash_clmulni_intel joydev pcspkr btusb uvcvideo btrtl
> >>>>> btbcm btintel videobuf2_vmalloc videobuf2_memops videobuf2_v4l2
> >>>>> videobuf2_common snd_ens1371 snd_ac97_codec ac97_bus snd_seq snd_pcm
> >>>>> videodev bluetooth mc rfkill ecdh_generic ecc snd_timer snd_rawmidi
> >>>>> snd_seq_device snd soundcore vmw_vmci i2c_piix4 auth_rpcgss sunrpc
> >>>>> ip_tables xfs libcrc32c sr_mod cdrom sg crc32c_intel ata_generic
> >>>>> serio_raw nvme vmwgfx drm_kms_helper
> >>>>> [  538.820577]  syscopyarea sysfillrect sysimgblt fb_sys_fops cec
> >>>>> nvme_core t10_pi ata_piix ahci libahci ttm vmxnet3 drm libata
> >>>>> [  538.869541] ---[ end trace 4abb2d95a72e9aab ]---
> >>>>
> >>>> I'm running a v5.10-rc3 client now with the fix applied and
> >>>> KASAN enabled. I traced my xfstests run and I'm definitely
> >>>> exercising the LISTXATTRS path:
> >>>>
> >>>> # trace-cmd report -F rpc_request | grep LISTXATTR | wc -l
> >>>> 462
> >>>>
> >>>> No crash or KASAN splat. I'm still missing something.
> >>>>
> >>>> I sometimes apply small patches from the mailing list by hand
> >>>> editing the modified file. Are you sure you applied my fix
> >>>> correctly?
> >>>
> >>> Again, I think the difference is Soft RoCE vs. hardware.
> >>
> >> I'm not finding that plausible... Can you troubleshoot this further
> >> to demonstrate how soft RoCE is contributing to this issue?
> >
> > Btw I'm not the only one. Anna's tests are failing too.
> >
> > OK, so all this started with the xattr patches (Trond's
> > tags/nfs-for-5.9-1 doesn't pass generic/013). Your patch isn't
> > sufficient to fix it.
>
> That might be true, but I'm not sure how you can reach that
> conclusion with certainty, because you don't yet have a root
> cause.

I don't understand the comment. The logic is exactly the same as in
your fix: without appropriately allocated memory, the decode walks
past the buffer and accesses memory it shouldn't.


>
> > When I increased decode_listxattrs_maxsz by 5, the oops went away
> > (but that's a hack). Here's a fix on top of your fix that allows my
> > tests to run.
> >
> > diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
> > index d0ddf90c9be4..49573b285946 100644
> > --- a/fs/nfs/nfs42xdr.c
> > +++ b/fs/nfs/nfs42xdr.c
> > @@ -179,7 +179,7 @@
> >                                 1 + nfs4_xattr_name_maxsz + 1)
> > #define decode_setxattr_maxsz   (op_decode_hdr_maxsz + decode_change_info_maxsz)
> > #define encode_listxattrs_maxsz  (op_encode_hdr_maxsz + 2 + 1)
> > -#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 1)
> > +#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 5)
> > #define encode_removexattr_maxsz (op_encode_hdr_maxsz + 1 + \
> >                                  nfs4_xattr_name_maxsz)
> > #define decode_removexattr_maxsz (op_decode_hdr_maxsz + \
> >
> > I would like Frank to comment on what decode_listxattrs_maxsz
> > should be. According to the RFC:
> >
> >   /// struct LISTXATTRS4resok {
> >   ///         nfs_cookie4    lxr_cookie;
> >   ///         xattrkey4      lxr_names<>;
> >   ///         bool           lxr_eof;
> >   /// };
> >
> > This is the current code (with Chuck's fix): op_decode_hdr_maxsz + 2 + 1 + 1 + 1
> >
> > I don't see how xattrkey4 lxr_names<> can be estimated by a
> > constant.
>
> That's right, the size of xattrkey4 is not provided by the maxsz
> constant.
>
> nfs4_xdr_enc_listxattrs() invokes rpc_prepare_reply_pages() to set
> up the pages needed for the variable part of the reply.
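
To make that split concrete, here is a small sketch in plain Python (not
kernel code; the op_decode_hdr_maxsz value of 2 words for opcode + status
is an assumption restated for illustration). The decode maxsz only has to
cover the fixed-length XDR words of LISTXATTRS4resok, because the
variable-length name data is received into the reply pages:

```python
# Hypothetical arithmetic for the fixed-length part of LISTXATTRS4resok.
# The variable-length lxr_names<> data is NOT counted here: it lands in
# the reply pages set up via rpc_prepare_reply_pages().
XDR_UNIT = 4                 # bytes per XDR word
OP_DECODE_HDR_MAXSZ = 2      # opcode + status words (assumed value)

decode_listxattrs_fixed = (
    OP_DECODE_HDR_MAXSZ
    + 2   # nfs_cookie4 lxr_cookie: 64-bit -> 2 XDR words
    + 1   # lxr_names<> array count word
    + 1   # bool lxr_eof
)

print(decode_listxattrs_fixed)              # fixed words the reply head must hold
print(decode_listxattrs_fixed * XDR_UNIT)   # bytes reserved in the reply head
```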
>
>
> > So I think the real fix should be:
> >
> > diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
> > index d0ddf90c9be4..73b44f8c036d 100644
> > --- a/fs/nfs/nfs42xdr.c
> > +++ b/fs/nfs/nfs42xdr.c
> > @@ -179,7 +179,8 @@
> >                                 1 + nfs4_xattr_name_maxsz + 1)
> > #define decode_setxattr_maxsz   (op_decode_hdr_maxsz + decode_change_info_maxsz)
> > #define encode_listxattrs_maxsz  (op_encode_hdr_maxsz + 2 + 1)
> > -#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + 1)
> > +#define decode_listxattrs_maxsz  (op_decode_hdr_maxsz + 2 + 1 + 1 + \
> > +                                 XDR_QUADLEN(NFS4_OPAQUE_LIMIT))
> > #define encode_removexattr_maxsz (op_encode_hdr_maxsz + 1 + \
> >                                  nfs4_xattr_name_maxsz)
> > #define decode_removexattr_maxsz (op_decode_hdr_maxsz + \
> >
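
For scale, the padding the proposed fix adds can be computed directly.
The definitions below mirror the kernel's XDR_QUADLEN() macro and
NFS4_OPAQUE_LIMIT constant, restated in Python for illustration:

```python
# XDR_QUADLEN(n) rounds a byte length up to whole 4-byte XDR words;
# NFS4_OPAQUE_LIMIT is the kernel's cap on an opaque XDR object.
NFS4_OPAQUE_LIMIT = 1024          # bytes

def xdr_quadlen(nbytes):
    return (nbytes + 3) >> 2      # round up to 4-byte XDR words

# Extra words the proposed decode_listxattrs_maxsz would reserve:
print(xdr_quadlen(NFS4_OPAQUE_LIMIT))
```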
> >>
> >
> >>
> >> --
> >> Chuck Lever
>
> --
> Chuck Lever


end of thread, other threads:[~2020-11-13 19:13 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-09 22:55 kernel oops in generic/013 on an rdma mount (over either soft roce or iwarp) Olga Kornievskaia
2020-11-09 22:59 ` Chuck Lever
2020-11-09 23:07   ` Olga Kornievskaia
2020-11-09 23:17     ` Olga Kornievskaia
2020-11-10 14:41       ` Chuck Lever
2020-11-10 16:51         ` Olga Kornievskaia
2020-11-10 17:44           ` Chuck Lever
2020-11-10 18:18             ` Olga Kornievskaia
2020-11-10 18:25               ` Chuck Lever
2020-11-11 21:42                 ` Olga Kornievskaia
2020-11-12 15:27                   ` Chuck Lever
2020-11-12 15:37                     ` Olga Kornievskaia
2020-11-12 20:48                       ` Chuck Lever
2020-11-13 18:49                         ` Olga Kornievskaia
2020-11-13 19:03                           ` Chuck Lever
2020-11-13 19:12                             ` Olga Kornievskaia
