* [Cluster-devel] Mounting lock_nolock file systems?
@ 2022-06-21  9:14 Christoph Hellwig
  2022-06-21 10:53 ` Andreas Gruenbacher
  2022-06-21 11:19 ` Andrew Price
  0 siblings, 2 replies; 6+ messages in thread
From: Christoph Hellwig @ 2022-06-21  9:14 UTC (permalink / raw)
  To: cluster-devel.redhat.com

I'm feeling a little stupid, but in the past after a

mkfs.gfs2  -O -p lock_nolock 

I could just mount the created file system locally.
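
In other words, roughly this sequence (device name taken from the log
below):

  mkfs.gfs2 -O -p lock_nolock /dev/vdb
  mount /dev/vdb /mnt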

On current mainline that does not seem to work any more, what am I
missing?

Here is the output from the mount attempt:

root@testvm:~# mount /dev/vdb /mnt/
[  154.745017] gfs2: fsid=vdb: Trying to join cluster "lock_nolock", "vdb"
[  154.745024] gfs2: fsid=vdb: Now mounting FS (format 1801)...
[  154.782279] gfs2: fsid=vdb.0: journal 0 mapped with 1 extents in 0ms
[  154.784878] gfs2: fsid=vdb.0: jid=0, already locked for use
[  154.784885] gfs2: fsid=vdb.0: jid=0: Looking at journal...
[  154.787474] gfs2: fsid=vdb.0: jid=0: Journal head lookup took 2ms
[  154.787482] gfs2: fsid=vdb.0: jid=0: Acquiring the transaction lock...
[  154.787513] gfs2: fsid=vdb.0: jid=0: Replaying journal...0x0 to 0x0
[  154.787522] gfs2: fsid=vdb.0: jid=0: Replayed 0 of 0 blocks
[  154.787525] gfs2: fsid=vdb.0: jid=0: Found 0 revoke tags
[  154.788116] gfs2: fsid=vdb.0: jid=0: Journal replayed in 3ms [jlck:0ms, jhead:2ms, tl]
[  154.788239] gfs2: fsid=vdb.0: jid=0: Done
[  154.789896] gfs2: fsid=vdb.0: first mount done, others may mount
[  154.802688] gfs2: fsid=vdb.0: fatal: filesystem consistency error - function = gfs2_m5
[  154.802700] gfs2: fsid=vdb.0: about to withdraw this file system
[  185.894949] gfs2: fsid=vdb.0: Journal recovery skipped for jid 0 until next mount.
[  185.894975] gfs2: fsid=vdb.0: Glock dequeues delayed: 0
[  185.895202] gfs2: fsid=vdb.0: File system withdrawn
[  185.895220] CPU: 1 PID: 2777 Comm: mount Not tainted 5.19.0-rc3+ #1713
[  185.895229] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/04
[  185.895239] Call Trace:
[  185.895247]  <TASK>
[  185.895251]  dump_stack_lvl+0x45/0x5a
[  185.895276]  gfs2_withdraw.cold+0xdb/0x507
[  185.895305]  gfs2_fill_super+0xb5a/0xbe0
[  185.895312]  ? gfs2_fill_super+0x771/0xbe0
[  185.895319]  ? gfs2_fill_super+0xa22/0xbe0
[  185.895325]  ? gfs2_reconfigure+0x3c0/0x3c0
[  185.895330]  get_tree_bdev+0x169/0x270
[  185.895341]  gfs2_get_tree+0x19/0x80
[  185.895346]  vfs_get_tree+0x20/0xb0
[  185.895352]  path_mount+0x2b1/0xa70
[  185.895362]  __x64_sys_mount+0xfe/0x140
[  185.895370]  do_syscall_64+0x3b/0x90
[  185.895378]  entry_SYSCALL_64_after_hwframe+0x46/0xb0
[  185.895388] RIP: 0033:0x7fd8ba7269ea
[  185.895404] Code: 48 8b 0d a9 f4 0b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 08
[  185.895410] RSP: 002b:00007ffea6838018 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
[  185.895419] RAX: ffffffffffffffda RBX: 00007fd8ba849264 RCX: 00007fd8ba7269ea
[  185.895423] RDX: 000055669b2724e0 RSI: 000055669b26dc50 RDI: 000055669b26b370
[  185.895427] RBP: 000055669b26b140 R08: 0000000000000000 R09: 00007fd8ba7e6be0
[  185.895431] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[  185.895434] R13: 000055669b26b370 R14: 000055669b2724e0 R15: 000055669b26b140
[  185.895443]  </TASK>
[  185.895461] gfs2: fsid=vdb.0: can't make FS RW: -5




* [Cluster-devel] Mounting lock_nolock file systems?
  2022-06-21  9:14 [Cluster-devel] Mounting lock_nolock file systems? Christoph Hellwig
@ 2022-06-21 10:53 ` Andreas Gruenbacher
  2022-06-21 11:19 ` Andrew Price
  1 sibling, 0 replies; 6+ messages in thread
From: Andreas Gruenbacher @ 2022-06-21 10:53 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Tue, Jun 21, 2022 at 11:14 AM Christoph Hellwig <hch@lst.de> wrote:
>
> I'm feeling a little stupid, but in the past after a
>
> mkfs.gfs2  -O -p lock_nolock
>
> I could just mount the created file system locally.
>
> On current mainline that does not seem to work any more, what am I
> missing?
>
> Here is the output from the mount attempt:
>
> root@testvm:~# mount /dev/vdb /mnt/
> [  154.745017] gfs2: fsid=vdb: Trying to join cluster "lock_nolock", "vdb"
> [  154.745024] gfs2: fsid=vdb: Now mounting FS (format 1801)...
> [  154.782279] gfs2: fsid=vdb.0: journal 0 mapped with 1 extents in 0ms
> [  154.784878] gfs2: fsid=vdb.0: jid=0, already locked for use
> [  154.784885] gfs2: fsid=vdb.0: jid=0: Looking at journal...
> [  154.787474] gfs2: fsid=vdb.0: jid=0: Journal head lookup took 2ms
> [  154.787482] gfs2: fsid=vdb.0: jid=0: Acquiring the transaction lock...
> [  154.787513] gfs2: fsid=vdb.0: jid=0: Replaying journal...0x0 to 0x0
> [  154.787522] gfs2: fsid=vdb.0: jid=0: Replayed 0 of 0 blocks
> [  154.787525] gfs2: fsid=vdb.0: jid=0: Found 0 revoke tags
> [  154.788116] gfs2: fsid=vdb.0: jid=0: Journal replayed in 3ms [jlck:0ms, jhead:2ms, tl]
> [  154.788239] gfs2: fsid=vdb.0: jid=0: Done
> [  154.789896] gfs2: fsid=vdb.0: first mount done, others may mount
> [  154.802688] gfs2: fsid=vdb.0: fatal: filesystem consistency error - function = gfs2_m5
> [  154.802700] gfs2: fsid=vdb.0: about to withdraw this file system
> [  185.894949] gfs2: fsid=vdb.0: Journal recovery skipped for jid 0 until next mount.
> [  185.894975] gfs2: fsid=vdb.0: Glock dequeues delayed: 0
> [  185.895202] gfs2: fsid=vdb.0: File system withdrawn
> [  185.895220] CPU: 1 PID: 2777 Comm: mount Not tainted 5.19.0-rc3+ #1713
> [  185.895229] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/04
> [  185.895239] Call Trace:
> [  185.895247]  <TASK>
> [  185.895251]  dump_stack_lvl+0x45/0x5a
> [  185.895276]  gfs2_withdraw.cold+0xdb/0x507
> [  185.895305]  gfs2_fill_super+0xb5a/0xbe0
> [  185.895312]  ? gfs2_fill_super+0x771/0xbe0
> [  185.895319]  ? gfs2_fill_super+0xa22/0xbe0
> [  185.895325]  ? gfs2_reconfigure+0x3c0/0x3c0
> [  185.895330]  get_tree_bdev+0x169/0x270
> [  185.895341]  gfs2_get_tree+0x19/0x80
> [  185.895346]  vfs_get_tree+0x20/0xb0
> [  185.895352]  path_mount+0x2b1/0xa70
> [  185.895362]  __x64_sys_mount+0xfe/0x140
> [  185.895370]  do_syscall_64+0x3b/0x90
> [  185.895378]  entry_SYSCALL_64_after_hwframe+0x46/0xb0
> [  185.895388] RIP: 0033:0x7fd8ba7269ea
> [  185.895404] Code: 48 8b 0d a9 f4 0b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 08
> [  185.895410] RSP: 002b:00007ffea6838018 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
> [  185.895419] RAX: ffffffffffffffda RBX: 00007fd8ba849264 RCX: 00007fd8ba7269ea
> [  185.895423] RDX: 000055669b2724e0 RSI: 000055669b26dc50 RDI: 000055669b26b370
> [  185.895427] RBP: 000055669b26b140 R08: 0000000000000000 R09: 00007fd8ba7e6be0
> [  185.895431] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> [  185.895434] R13: 000055669b26b370 R14: 000055669b2724e0 R15: 000055669b26b140
> [  185.895443]  </TASK>
> [  185.895461] gfs2: fsid=vdb.0: can't make FS RW: -5

Hmm, that's supposed to work, and it's working here with kernel
5.19.0-rc3 and multiple versions of mkfs.gfs2. I'm getting slightly
different output from the kernel, though:

gfs2: fsid=vdc: Trying to join cluster "lock_nolock", "vdc"
gfs2: fsid=vdc: Now mounting FS (format 1801)...
gfs2: fsid=vdc.0: journal 0 mapped with 1 extents in 0ms
gfs2: fsid=vdc.0: jid=0, already locked for use
gfs2: fsid=vdc.0: jid=0: Looking at journal...
gfs2: fsid=vdc.0: jid=0: Journal head lookup took 81ms
gfs2: fsid=vdc.0: jid=0: Done
gfs2: fsid=vdc.0: first mount done, others may mount

Is the caching behavior of your vdb device configured weirdly?
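
For example, you could check what the guest sees (sysfs path assuming a
virtio-blk device named vdb):

  cat /sys/block/vdb/queue/write_cache

and the cache= setting of the corresponding -drive option on the QEMU
side may also be worth a look.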

Thanks,
Andreas



* [Cluster-devel] Mounting lock_nolock file systems?
  2022-06-21  9:14 [Cluster-devel] Mounting lock_nolock file systems? Christoph Hellwig
  2022-06-21 10:53 ` Andreas Gruenbacher
@ 2022-06-21 11:19 ` Andrew Price
  2022-06-21 12:58   ` Christoph Hellwig
  1 sibling, 1 reply; 6+ messages in thread
From: Andrew Price @ 2022-06-21 11:19 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On 21/06/2022 10:14, Christoph Hellwig wrote:
> I'm feeling a little stupid, but in the past after a
> 
> mkfs.gfs2  -O -p lock_nolock
> 
> I could just mount the created file system locally.
> 
> On current mainline that does not seem to work any more, what am I
> missing?

I can't reproduce the problem on current mainline. What version of 
gfs2-utils is your mkfs.gfs2 from?

Could you send your superblock?

   dd if=/dev/vdb bs=4k skip=16 count=1 status=none | xxd -a

will grab it (bs=4k skip=16 reads the 4KiB block at offset 64KiB, which
is where mkfs.gfs2 places the superblock).

Andy

> Here is the output from the mount attempt:
> 
> root@testvm:~# mount /dev/vdb /mnt/
> [  154.745017] gfs2: fsid=vdb: Trying to join cluster "lock_nolock", "vdb"
> [  154.745024] gfs2: fsid=vdb: Now mounting FS (format 1801)...
> [  154.782279] gfs2: fsid=vdb.0: journal 0 mapped with 1 extents in 0ms
> [  154.784878] gfs2: fsid=vdb.0: jid=0, already locked for use
> [  154.784885] gfs2: fsid=vdb.0: jid=0: Looking at journal...
> [  154.787474] gfs2: fsid=vdb.0: jid=0: Journal head lookup took 2ms
> [  154.787482] gfs2: fsid=vdb.0: jid=0: Acquiring the transaction lock...
> [  154.787513] gfs2: fsid=vdb.0: jid=0: Replaying journal...0x0 to 0x0
> [  154.787522] gfs2: fsid=vdb.0: jid=0: Replayed 0 of 0 blocks
> [  154.787525] gfs2: fsid=vdb.0: jid=0: Found 0 revoke tags
> [  154.788116] gfs2: fsid=vdb.0: jid=0: Journal replayed in 3ms [jlck:0ms, jhead:2ms, tl]
> [  154.788239] gfs2: fsid=vdb.0: jid=0: Done
> [  154.789896] gfs2: fsid=vdb.0: first mount done, others may mount
> [  154.802688] gfs2: fsid=vdb.0: fatal: filesystem consistency error - function = gfs2_m5
> [  154.802700] gfs2: fsid=vdb.0: about to withdraw this file system
> [  185.894949] gfs2: fsid=vdb.0: Journal recovery skipped for jid 0 until next mount.
> [  185.894975] gfs2: fsid=vdb.0: Glock dequeues delayed: 0
> [  185.895202] gfs2: fsid=vdb.0: File system withdrawn
> [  185.895220] CPU: 1 PID: 2777 Comm: mount Not tainted 5.19.0-rc3+ #1713
> [  185.895229] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/04
> [  185.895239] Call Trace:
> [  185.895247]  <TASK>
> [  185.895251]  dump_stack_lvl+0x45/0x5a
> [  185.895276]  gfs2_withdraw.cold+0xdb/0x507
> [  185.895305]  gfs2_fill_super+0xb5a/0xbe0
> [  185.895312]  ? gfs2_fill_super+0x771/0xbe0
> [  185.895319]  ? gfs2_fill_super+0xa22/0xbe0
> [  185.895325]  ? gfs2_reconfigure+0x3c0/0x3c0
> [  185.895330]  get_tree_bdev+0x169/0x270
> [  185.895341]  gfs2_get_tree+0x19/0x80
> [  185.895346]  vfs_get_tree+0x20/0xb0
> [  185.895352]  path_mount+0x2b1/0xa70
> [  185.895362]  __x64_sys_mount+0xfe/0x140
> [  185.895370]  do_syscall_64+0x3b/0x90
> [  185.895378]  entry_SYSCALL_64_after_hwframe+0x46/0xb0
> [  185.895388] RIP: 0033:0x7fd8ba7269ea
> [  185.895404] Code: 48 8b 0d a9 f4 0b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 08
> [  185.895410] RSP: 002b:00007ffea6838018 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
> [  185.895419] RAX: ffffffffffffffda RBX: 00007fd8ba849264 RCX: 00007fd8ba7269ea
> [  185.895423] RDX: 000055669b2724e0 RSI: 000055669b26dc50 RDI: 000055669b26b370
> [  185.895427] RBP: 000055669b26b140 R08: 0000000000000000 R09: 00007fd8ba7e6be0
> [  185.895431] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> [  185.895434] R13: 000055669b26b370 R14: 000055669b2724e0 R15: 000055669b26b140
> [  185.895443]  </TASK>
> [  185.895461] gfs2: fsid=vdb.0: can't make FS RW: -5
> 
> 



* [Cluster-devel] Mounting lock_nolock file systems?
  2022-06-21 11:19 ` Andrew Price
@ 2022-06-21 12:58   ` Christoph Hellwig
  2022-06-22 14:30     ` Christoph Hellwig
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2022-06-21 12:58 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Tue, Jun 21, 2022 at 12:19:02PM +0100, Andrew Price wrote:
> On 21/06/2022 10:14, Christoph Hellwig wrote:
>> I'm feeling a little stupid, but in the past after a
>>
>> mkfs.gfs2  -O -p lock_nolock
>>
>> I could just mount the created file system locally.
>>
>> On current mainline that does not seem to work any more, what am I
>> missing?
>
> I can't reproduce the problem on current mainline. What version of 
> gfs2-utils is your mkfs.gfs2 from?

Sorry, actually it was the pagecache for-next branch from willy.  Looks
like mainline itself is fine.

I'll try to get the superblock information from the pagecache branch
once I find a little time, chasing a bunch of other bugs in the meantime.



* [Cluster-devel] Mounting lock_nolock file systems?
  2022-06-21 12:58   ` Christoph Hellwig
@ 2022-06-22 14:30     ` Christoph Hellwig
  2022-06-23 15:44       ` Andreas Gruenbacher
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2022-06-22 14:30 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Tue, Jun 21, 2022 at 02:58:57PM +0200, Christoph Hellwig wrote:
> Sorry, actually it was the pagecache for-next branch from willy.  Looks
> like mainline itself is fine.
> 
> I'll try to get the superblock information from the pagecache branch
> once I find a little time, chasing a bunch of other bugs in the meantime.

I bisected it down to:

commit 1abe0e8c19c514827408ba7e7e84969b6f2e784f
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Wed May 18 14:41:39 2022 -0400

    gfs: Check PageUptodate instead of PageError

    This is the correct flag to test to know if the read completed
    successfully.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

but I don't have any explanation for how it caused that breakage yet.
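
For reference, the change boils down to a condition swap along these
lines in the journal-head lookup path (illustrative, not the exact
hunk):

-        if (PageError(page))
+        if (!PageUptodate(page))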



* [Cluster-devel] Mounting lock_nolock file systems?
  2022-06-22 14:30     ` Christoph Hellwig
@ 2022-06-23 15:44       ` Andreas Gruenbacher
  0 siblings, 0 replies; 6+ messages in thread
From: Andreas Gruenbacher @ 2022-06-23 15:44 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Wed, Jun 22, 2022 at 4:32 PM Christoph Hellwig <hch@lst.de> wrote:
> On Tue, Jun 21, 2022 at 02:58:57PM +0200, Christoph Hellwig wrote:
> > Sorry, actually it was the pagecache for-next branch from willy.  Looks
> > like mainline itself is fine.
> >
> > I'll try to get the superblock information from the pagecache branch
> > once I find a little time, chasing a bunch of other bugs in the meantime.
>
> I bisected it down to:
>
> commit 1abe0e8c19c514827408ba7e7e84969b6f2e784f
> Author: Matthew Wilcox (Oracle) <willy@infradead.org>
> Date:   Wed May 18 14:41:39 2022 -0400
>
>     gfs: Check PageUptodate instead of PageError
>
>     This is the correct flag to test to know if the read completed
>     successfully.
>
>     Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>
> but I don't have any explanation for how it caused that breakage yet.

gfs2_find_jhead() uses gfs2_end_log_read() as the end_io function of
the bios it submits, and gfs2_end_log_read() doesn't set the pages it
reads uptodate. That should be fixed; it doesn't make much sense.
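
A sketch of what that fix could look like (untested; the real
gfs2_end_log_read() in fs/gfs2/lops.c may differ in detail):

static void gfs2_end_log_read(struct bio *bio)
{
        struct bio_vec *bvec;
        struct bvec_iter_all iter_all;

        bio_for_each_segment_all(bvec, bio, iter_all) {
                struct page *page = bvec->bv_page;

                if (bio->bi_status)
                        SetPageError(page);
                else
                        SetPageUptodate(page);  /* the missing step */
                unlock_page(page);
        }
        bio_put(bio);
}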

Willy, can you remove the above patch from the pagecache tree? We can
put it in the gfs2 tree after that gfs2_end_log_read() fix.

(Side note: it's gfs2, not gfs.)

Thanks,
Andreas


