* locks_get_lock_context null deref
@ 2015-10-19 16:33 ` William Dauchy
  0 siblings, 1 reply; 5 messages in thread
From: William Dauchy @ 2015-10-19 16:33 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Linux NFS mailing list, Jeff Layton

Hello,

I am getting the following NULL dereference in locks_get_lock_context on a
v4.1.x kernel (x86_64) while using the NFSv4.0 client.

Any hints on how to debug this issue?


BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
IP: [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
PGD 0 
Oops: 0000 [#1] SMP 
CPU: 1 PID: 1773 Comm: kworker/1:1H Not tainted 4.1.11-rc1 #1
Workqueue: rpciod ffffffff8164fff0
task: ffff8810374deba0 ti: ffff8810374df150 task.ti: ffff8810374df150
RIP: 0010:[<ffffffff811d0cf3>]  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
RSP: 0000:ffff881036007bb0  EFLAGS: 00010246
RAX: ffff881036007c30 RBX: ffff881001981880 RCX: 0000000000000002
RDX: 00000000000006ed RSI: 0000000000000002 RDI: 0000000000000000
RBP: ffff881036007c08 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: ffff88101db69948 R12: ffff8810019818d8
R13: ffff881036007bc8 R14: ffff880e225d81c0 R15: ffff881edfd2b400
FS:  0000000000000000(0000) GS:ffff88103fc20000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000001c8 CR3: 000000000169b000 CR4: 00000000000606f0
Stack:
 ffffffff811d2710 ffff881036007bc8 ffffffff819f1af1 ffff881036007bc8
 ffff881036007bc8 ffff881036007c08 ffff881001981880 ffff8810019818d8
 ffff881036007c48 ffff880e225d81c0 ffff881edfd2b400 ffff881036007c88
Call Trace:
 [<ffffffff811d2710>] ? flock_lock_file+0x30/0x270
 [<ffffffff811d3ad1>] flock_lock_file_wait+0x41/0xf0
 [<ffffffff8168be66>] ? _raw_spin_unlock+0x26/0x40
 [<ffffffff81268de9>] do_vfs_lock+0x19/0x40
 [<ffffffff812695cc>] nfs4_locku_done+0x5c/0xf0
 [<ffffffff8164f3f4>] rpc_exit_task+0x34/0xb0
 [<ffffffff8164fcd9>] __rpc_execute+0x79/0x390
 [<ffffffff81650000>] rpc_async_schedule+0x10/0x20
 [<ffffffff81086095>] process_one_work+0x1a5/0x450
 [<ffffffff81086024>] ? process_one_work+0x134/0x450
 [<ffffffff8108638b>] worker_thread+0x4b/0x4a0
 [<ffffffff81086340>] ? process_one_work+0x450/0x450
 [<ffffffff81086340>] ? process_one_work+0x450/0x450
 [<ffffffff8108d777>] kthread+0xf7/0x110
 [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
 [<ffffffff8168ce3e>] ret_from_fork+0x3e/0x70
 [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
Code: 48 b8 00 00 00 00 00 00 00 80 55 48 89 e5 48 09 c1 ff d1 5d 85 c0 0f 95 c0 0f b6 c0 eb b9 66 2e 0f 1f 84 00 00 00 00 00 83 fe 02 <48> 8b 87 c8 01 00 00 0f 84 a0 00 00 00 48 85 c0 0f 85 97 00 00 
RIP  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
 RSP <ffff881036007bb0>
CR2: 00000000000001c8
---[ end trace 2da9686dda1b5574 ]---
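
For what it's worth, decoding the "Code:" bytes above, the faulting
instruction (the byte marked <48>) disassembles to:

  48 8b 87 c8 01 00 00    mov    0x1c8(%rdi),%rax

With RDI = 0 in the register dump and CR2 = 0x1c8, it looks like
locks_get_lock_context() was called with a NULL inode pointer and then
dereferenced a field at offset 0x1c8 into struct inode (presumably
i_flctx in this build).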


Thanks,
-- 
William

* Re: locks_get_lock_context null deref
  2015-10-19 16:33 ` William Dauchy
@ 2015-10-19 17:40 ` William Dauchy
  0 siblings, 1 reply; 5 messages in thread
From: William Dauchy @ 2015-10-19 17:40 UTC (permalink / raw)
  To: William Dauchy; +Cc: linux-fsdevel, Linux NFS mailing list, Jeff Layton

On Oct19 18:33, William Dauchy wrote:
> I am getting the following NULL dereference in locks_get_lock_context on a
> v4.1.x kernel (x86_64) while using the NFSv4.0 client.
> 
> Any hints on how to debug this issue?
> 
> 
> BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
> IP: [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> PGD 0 
> Oops: 0000 [#1] SMP 
> CPU: 1 PID: 1773 Comm: kworker/1:1H Not tainted 4.1.11-rc1 #1
> Workqueue: rpciod ffffffff8164fff0
> task: ffff8810374deba0 ti: ffff8810374df150 task.ti: ffff8810374df150
> RIP: 0010:[<ffffffff811d0cf3>]  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> RSP: 0000:ffff881036007bb0  EFLAGS: 00010246
> RAX: ffff881036007c30 RBX: ffff881001981880 RCX: 0000000000000002
> RDX: 00000000000006ed RSI: 0000000000000002 RDI: 0000000000000000
> RBP: ffff881036007c08 R08: 0000000000000000 R09: 0000000000000001
> R10: 0000000000000000 R11: ffff88101db69948 R12: ffff8810019818d8
> R13: ffff881036007bc8 R14: ffff880e225d81c0 R15: ffff881edfd2b400
> FS:  0000000000000000(0000) GS:ffff88103fc20000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00000000000001c8 CR3: 000000000169b000 CR4: 00000000000606f0
> Stack:
>  ffffffff811d2710 ffff881036007bc8 ffffffff819f1af1 ffff881036007bc8
>  ffff881036007bc8 ffff881036007c08 ffff881001981880 ffff8810019818d8
>  ffff881036007c48 ffff880e225d81c0 ffff881edfd2b400 ffff881036007c88
> Call Trace:
>  [<ffffffff811d2710>] ? flock_lock_file+0x30/0x270
>  [<ffffffff811d3ad1>] flock_lock_file_wait+0x41/0xf0
>  [<ffffffff8168be66>] ? _raw_spin_unlock+0x26/0x40
>  [<ffffffff81268de9>] do_vfs_lock+0x19/0x40
>  [<ffffffff812695cc>] nfs4_locku_done+0x5c/0xf0
>  [<ffffffff8164f3f4>] rpc_exit_task+0x34/0xb0
>  [<ffffffff8164fcd9>] __rpc_execute+0x79/0x390
>  [<ffffffff81650000>] rpc_async_schedule+0x10/0x20
>  [<ffffffff81086095>] process_one_work+0x1a5/0x450
>  [<ffffffff81086024>] ? process_one_work+0x134/0x450
>  [<ffffffff8108638b>] worker_thread+0x4b/0x4a0
>  [<ffffffff81086340>] ? process_one_work+0x450/0x450
>  [<ffffffff81086340>] ? process_one_work+0x450/0x450
>  [<ffffffff8108d777>] kthread+0xf7/0x110
>  [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
>  [<ffffffff8168ce3e>] ret_from_fork+0x3e/0x70
>  [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
> Code: 48 b8 00 00 00 00 00 00 00 80 55 48 89 e5 48 09 c1 ff d1 5d 85 c0 0f 95 c0 0f b6 c0 eb b9 66 2e 0f 1f 84 00 00 00 00 00 83 fe 02 <48> 8b 87 c8 01 00 00 0f 84 a0 00 00 00 48 85 c0 0f 85 97 00 00 
> RIP  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
>  RSP <ffff881036007bb0>
> CR2: 00000000000001c8
> ---[ end trace 2da9686dda1b5574 ]---

As mentioned in another thread by Jeff, I applied the following commits:

bcd7f78 locks: have flock_lock_file take an inode pointer instead of a filp
29d01b2 locks: new helpers - flock_lock_inode_wait and posix_lock_inode_wait
ee296d7 locks: inline posix_lock_file_wait and flock_lock_file_wait
83bfff2 nfs4: have do_vfs_lock take an inode pointer

I will see if I get the same issue again.
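
For reference, a rough sketch of the interface change these commits make
(signatures paraphrased from the commit subjects, not copied verbatim from
the tree): the lock-wait helpers grow inode-based variants, so the NFSv4
async unlock completion no longer has to reach the inode through a struct
file that may already have been released by the time rpciod runs
nfs4_locku_done():

	/* before: the locks code derives the inode from the filp, which
	 * can be gone by the time the async RPC completion runs */
	int flock_lock_file_wait(struct file *filp, struct file_lock *fl);

	/* after (sketch): callers that can outlive the struct file pass
	 * the inode directly */
	int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl);
	int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl);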

-- 
William

* Re: locks_get_lock_context null deref
  2015-10-19 17:40 ` William Dauchy
@ 2015-10-22 17:32       ` William Dauchy
  0 siblings, 1 reply; 5 messages in thread
From: William Dauchy @ 2015-10-22 17:32 UTC (permalink / raw)
  To: Jeff Layton; +Cc: linux-fsdevel, Linux NFS mailing list, Jeff Layton, william

Hi Jeff,

After a few days of testing, I was unable to reproduce the null deref
mentioned in this thread.
Do you think we can ask for a backport in stable@ for v4.1?

bcd7f78 locks: have flock_lock_file take an inode pointer instead of a filp
29d01b2 locks: new helpers - flock_lock_inode_wait and posix_lock_inode_wait
ee296d7 locks: inline posix_lock_file_wait and flock_lock_file_wait
83bfff2 nfs4: have do_vfs_lock take an inode pointer

On Oct19 19:40, William Dauchy wrote:
> On Oct19 18:33, William Dauchy wrote:
> > I am getting the following NULL dereference in locks_get_lock_context on a
> > v4.1.x kernel (x86_64) while using the NFSv4.0 client.
> > 
> > Any hints on how to debug this issue?
> > 
> > 
> > BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
> > IP: [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> > PGD 0 
> > Oops: 0000 [#1] SMP 
> > CPU: 1 PID: 1773 Comm: kworker/1:1H Not tainted 4.1.11-rc1 #1
> > Workqueue: rpciod ffffffff8164fff0
> > task: ffff8810374deba0 ti: ffff8810374df150 task.ti: ffff8810374df150
> > RIP: 0010:[<ffffffff811d0cf3>]  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> > RSP: 0000:ffff881036007bb0  EFLAGS: 00010246
> > RAX: ffff881036007c30 RBX: ffff881001981880 RCX: 0000000000000002
> > RDX: 00000000000006ed RSI: 0000000000000002 RDI: 0000000000000000
> > RBP: ffff881036007c08 R08: 0000000000000000 R09: 0000000000000001
> > R10: 0000000000000000 R11: ffff88101db69948 R12: ffff8810019818d8
> > R13: ffff881036007bc8 R14: ffff880e225d81c0 R15: ffff881edfd2b400
> > FS:  0000000000000000(0000) GS:ffff88103fc20000(0000) knlGS:0000000000000000
> > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: 00000000000001c8 CR3: 000000000169b000 CR4: 00000000000606f0
> > Stack:
> >  ffffffff811d2710 ffff881036007bc8 ffffffff819f1af1 ffff881036007bc8
> >  ffff881036007bc8 ffff881036007c08 ffff881001981880 ffff8810019818d8
> >  ffff881036007c48 ffff880e225d81c0 ffff881edfd2b400 ffff881036007c88
> > Call Trace:
> >  [<ffffffff811d2710>] ? flock_lock_file+0x30/0x270
> >  [<ffffffff811d3ad1>] flock_lock_file_wait+0x41/0xf0
> >  [<ffffffff8168be66>] ? _raw_spin_unlock+0x26/0x40
> >  [<ffffffff81268de9>] do_vfs_lock+0x19/0x40
> >  [<ffffffff812695cc>] nfs4_locku_done+0x5c/0xf0
> >  [<ffffffff8164f3f4>] rpc_exit_task+0x34/0xb0
> >  [<ffffffff8164fcd9>] __rpc_execute+0x79/0x390
> >  [<ffffffff81650000>] rpc_async_schedule+0x10/0x20
> >  [<ffffffff81086095>] process_one_work+0x1a5/0x450
> >  [<ffffffff81086024>] ? process_one_work+0x134/0x450
> >  [<ffffffff8108638b>] worker_thread+0x4b/0x4a0
> >  [<ffffffff81086340>] ? process_one_work+0x450/0x450
> >  [<ffffffff81086340>] ? process_one_work+0x450/0x450
> >  [<ffffffff8108d777>] kthread+0xf7/0x110
> >  [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
> >  [<ffffffff8168ce3e>] ret_from_fork+0x3e/0x70
> >  [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
> > Code: 48 b8 00 00 00 00 00 00 00 80 55 48 89 e5 48 09 c1 ff d1 5d 85 c0 0f 95 c0 0f b6 c0 eb b9 66 2e 0f 1f 84 00 00 00 00 00 83 fe 02 <48> 8b 87 c8 01 00 00 0f 84 a0 00 00 00 48 85 c0 0f 85 97 00 00 
> > RIP  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> >  RSP <ffff881036007bb0>
> > CR2: 00000000000001c8
> > ---[ end trace 2da9686dda1b5574 ]---
> 
> As mentioned in another thread by Jeff, I applied the following commits:
> 
> bcd7f78 locks: have flock_lock_file take an inode pointer instead of a filp
> 29d01b2 locks: new helpers - flock_lock_inode_wait and posix_lock_inode_wait
> ee296d7 locks: inline posix_lock_file_wait and flock_lock_file_wait
> 83bfff2 nfs4: have do_vfs_lock take an inode pointer
> 
> I will see if I get the same issue again.

-- 
William

* Re: locks_get_lock_context null deref
  2015-10-22 17:32       ` William Dauchy
@ 2015-10-22 18:39           ` Jeff Layton
  0 siblings, 1 reply; 5 messages in thread
From: Jeff Layton @ 2015-10-22 18:39 UTC (permalink / raw)
  To: William Dauchy; +Cc: linux-fsdevel, Linux NFS mailing list, Jeff Layton

On Thu, 22 Oct 2015 19:32:43 +0200
William Dauchy <william@gandi.net> wrote:

> Hi Jeff,
> 
> After a few days of testing, I was unable to reproduce the null deref
> mentioned in this thread.
> Do you think we can ask for a backport in stable@ for v4.1?
> 
> bcd7f78 locks: have flock_lock_file take an inode pointer instead of a filp
> 29d01b2 locks: new helpers - flock_lock_inode_wait and posix_lock_inode_wait
> ee296d7 locks: inline posix_lock_file_wait and flock_lock_file_wait
> 83bfff2 nfs4: have do_vfs_lock take an inode pointer
> 

Yes, I think those four patches should go into v4.1-stable. Do you need me
to do anything to make that happen?

Thanks,
Jeff

> On Oct19 19:40, William Dauchy wrote:
> > On Oct19 18:33, William Dauchy wrote:
> > > I am getting the following NULL dereference in locks_get_lock_context on a
> > > v4.1.x kernel (x86_64) while using the NFSv4.0 client.
> > > 
> > > Any hints on how to debug this issue?
> > > 
> > > 
> > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
> > > IP: [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> > > PGD 0 
> > > Oops: 0000 [#1] SMP 
> > > CPU: 1 PID: 1773 Comm: kworker/1:1H Not tainted 4.1.11-rc1 #1
> > > Workqueue: rpciod ffffffff8164fff0
> > > task: ffff8810374deba0 ti: ffff8810374df150 task.ti: ffff8810374df150
> > > RIP: 0010:[<ffffffff811d0cf3>]  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> > > RSP: 0000:ffff881036007bb0  EFLAGS: 00010246
> > > RAX: ffff881036007c30 RBX: ffff881001981880 RCX: 0000000000000002
> > > RDX: 00000000000006ed RSI: 0000000000000002 RDI: 0000000000000000
> > > RBP: ffff881036007c08 R08: 0000000000000000 R09: 0000000000000001
> > > R10: 0000000000000000 R11: ffff88101db69948 R12: ffff8810019818d8
> > > R13: ffff881036007bc8 R14: ffff880e225d81c0 R15: ffff881edfd2b400
> > > FS:  0000000000000000(0000) GS:ffff88103fc20000(0000) knlGS:0000000000000000
> > > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > CR2: 00000000000001c8 CR3: 000000000169b000 CR4: 00000000000606f0
> > > Stack:
> > >  ffffffff811d2710 ffff881036007bc8 ffffffff819f1af1 ffff881036007bc8
> > >  ffff881036007bc8 ffff881036007c08 ffff881001981880 ffff8810019818d8
> > >  ffff881036007c48 ffff880e225d81c0 ffff881edfd2b400 ffff881036007c88
> > > Call Trace:
> > >  [<ffffffff811d2710>] ? flock_lock_file+0x30/0x270
> > >  [<ffffffff811d3ad1>] flock_lock_file_wait+0x41/0xf0
> > >  [<ffffffff8168be66>] ? _raw_spin_unlock+0x26/0x40
> > >  [<ffffffff81268de9>] do_vfs_lock+0x19/0x40
> > >  [<ffffffff812695cc>] nfs4_locku_done+0x5c/0xf0
> > >  [<ffffffff8164f3f4>] rpc_exit_task+0x34/0xb0
> > >  [<ffffffff8164fcd9>] __rpc_execute+0x79/0x390
> > >  [<ffffffff81650000>] rpc_async_schedule+0x10/0x20
> > >  [<ffffffff81086095>] process_one_work+0x1a5/0x450
> > >  [<ffffffff81086024>] ? process_one_work+0x134/0x450
> > >  [<ffffffff8108638b>] worker_thread+0x4b/0x4a0
> > >  [<ffffffff81086340>] ? process_one_work+0x450/0x450
> > >  [<ffffffff81086340>] ? process_one_work+0x450/0x450
> > >  [<ffffffff8108d777>] kthread+0xf7/0x110
> > >  [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
> > >  [<ffffffff8168ce3e>] ret_from_fork+0x3e/0x70
> > >  [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
> > > Code: 48 b8 00 00 00 00 00 00 00 80 55 48 89 e5 48 09 c1 ff d1 5d 85 c0 0f 95 c0 0f b6 c0 eb b9 66 2e 0f 1f 84 00 00 00 00 00 83 fe 02 <48> 8b 87 c8 01 00 00 0f 84 a0 00 00 00 48 85 c0 0f 85 97 00 00 
> > > RIP  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
> > >  RSP <ffff881036007bb0>
> > > CR2: 00000000000001c8
> > > ---[ end trace 2da9686dda1b5574 ]---
> > 
> > As mentioned in another thread by Jeff, I applied the following commits:
> > 
> > bcd7f78 locks: have flock_lock_file take an inode pointer instead of a filp
> > 29d01b2 locks: new helpers - flock_lock_inode_wait and posix_lock_inode_wait
> > ee296d7 locks: inline posix_lock_file_wait and flock_lock_file_wait
> > 83bfff2 nfs4: have do_vfs_lock take an inode pointer
> > 
> > I will see if I get the same issue again.
> 


-- 
Jeff Layton <jlayton@poochiereds.net>

* Re: locks_get_lock_context null deref
  2015-10-22 18:39           ` Jeff Layton
@ 2015-10-22 18:55 ` William Dauchy
  0 siblings, 0 replies; 5 messages in thread
From: William Dauchy @ 2015-10-22 18:55 UTC (permalink / raw)
  To: stable
  Cc: William Dauchy, linux-fsdevel, Linux NFS mailing list,
	Jeff Layton, Jeff Layton

Hi stable team,

I got the following NULL dereference in locks_get_lock_context using the
NFSv4.0 client on a Linux v4.1.x kernel.
After applying these four patches on top, I was unable to reproduce the
issue:

bcd7f78 locks: have flock_lock_file take an inode pointer instead of a filp
29d01b2 locks: new helpers - flock_lock_inode_wait and posix_lock_inode_wait
ee296d7 locks: inline posix_lock_file_wait and flock_lock_file_wait
83bfff2 nfs4: have do_vfs_lock take an inode pointer

Jeff Layton agreed these should be candidates for the v4.1.x stable tree.
Do you think these four patches could be queued for v4.1.x?


BUG: unable to handle kernel NULL pointer dereference at 00000000000001c8
IP: [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
PGD 0 
Oops: 0000 [#1] SMP 
CPU: 1 PID: 1773 Comm: kworker/1:1H Not tainted 4.1.11-rc1 #1
Workqueue: rpciod ffffffff8164fff0
task: ffff8810374deba0 ti: ffff8810374df150 task.ti: ffff8810374df150
RIP: 0010:[<ffffffff811d0cf3>]  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
RSP: 0000:ffff881036007bb0  EFLAGS: 00010246
RAX: ffff881036007c30 RBX: ffff881001981880 RCX: 0000000000000002
RDX: 00000000000006ed RSI: 0000000000000002 RDI: 0000000000000000
RBP: ffff881036007c08 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: ffff88101db69948 R12: ffff8810019818d8
R13: ffff881036007bc8 R14: ffff880e225d81c0 R15: ffff881edfd2b400
FS:  0000000000000000(0000) GS:ffff88103fc20000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000001c8 CR3: 000000000169b000 CR4: 00000000000606f0
Stack:
 ffffffff811d2710 ffff881036007bc8 ffffffff819f1af1 ffff881036007bc8
 ffff881036007bc8 ffff881036007c08 ffff881001981880 ffff8810019818d8
 ffff881036007c48 ffff880e225d81c0 ffff881edfd2b400 ffff881036007c88
Call Trace:
 [<ffffffff811d2710>] ? flock_lock_file+0x30/0x270
 [<ffffffff811d3ad1>] flock_lock_file_wait+0x41/0xf0
 [<ffffffff8168be66>] ? _raw_spin_unlock+0x26/0x40
 [<ffffffff81268de9>] do_vfs_lock+0x19/0x40
 [<ffffffff812695cc>] nfs4_locku_done+0x5c/0xf0
 [<ffffffff8164f3f4>] rpc_exit_task+0x34/0xb0
 [<ffffffff8164fcd9>] __rpc_execute+0x79/0x390
 [<ffffffff81650000>] rpc_async_schedule+0x10/0x20
 [<ffffffff81086095>] process_one_work+0x1a5/0x450
 [<ffffffff81086024>] ? process_one_work+0x134/0x450
 [<ffffffff8108638b>] worker_thread+0x4b/0x4a0
 [<ffffffff81086340>] ? process_one_work+0x450/0x450
 [<ffffffff81086340>] ? process_one_work+0x450/0x450
 [<ffffffff8108d777>] kthread+0xf7/0x110
 [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
 [<ffffffff8168ce3e>] ret_from_fork+0x3e/0x70
 [<ffffffff8108d680>] ? __kthread_parkme+0xa0/0xa0
Code: 48 b8 00 00 00 00 00 00 00 80 55 48 89 e5 48 09 c1 ff d1 5d 85 c0 0f 95 c0 0f b6 c0 eb b9 66 2e 0f 1f 84 00 00 00 00 00 83 fe 02 <48> 8b 87 c8 01 00 00 0f 84 a0 00 00 00 48 85 c0 0f 85 97 00 00 
RIP  [<ffffffff811d0cf3>] locks_get_lock_context+0x3/0xc0
 RSP <ffff881036007bb0>
CR2: 00000000000001c8
---[ end trace 2da9686dda1b5574 ]---


Thanks,
-- 
William

Thread overview: 5 messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-19 16:33 locks_get_lock_context null deref William Dauchy
2015-10-19 17:40 ` William Dauchy
2015-10-22 17:32   ` William Dauchy
2015-10-22 18:39     ` Jeff Layton
2015-10-22 18:55       ` William Dauchy
