Subject: make_request bug: can't convert block across chunks or bigger than 512k 2554601336 124
From: Brian J. Murrell @ 2016-04-08 11:27 UTC
  To: linux-raid


So I've had this RAID configuration set up and running for many months,
if not years:

/dev/md1:
        Version : 1.2
  Creation Time : Sat Dec 26 19:49:41 2015
     Raid Level : raid0
     Array Size : 1953524736 (1863.03 GiB 2000.41 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Dec 26 19:49:41 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : linux.interlinx.bc.ca:1  (local to host linux.interlinx.bc.ca)
           UUID : f27ae74d:e2997e50:9df158ee:c5ed3cbb
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

/dev/md0:
        Version : 0.90
  Creation Time : Mon Jan 26 19:51:38 2009
     Raid Level : raid1
     Array Size : 1953514496 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Apr  8 00:00:42 2016
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           UUID : 2f8fc5e0:a0eb646a:2303d005:33f25f21
         Events : 0.5031387

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       0        0        1      removed

       2       9        1        -      faulty spare   /dev/md1
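
For context: md0 is a RAID1 that mirrors the bare disk sdd against the
RAID0 array md1, so every request to md0 also has to pass through md1's
512K chunk mapping.  A toy model of that mapping (my own sketch, assuming
a single raid0 zone with two equal-size members; not the kernel code):

/* chunkmap.c - toy model of raid0 striping: 2 members, 512 KiB chunks.
 * Hypothetical helper, not kernel code; assumes one zone, equal members.
 */
#include <stdio.h>
#include <stdint.h>

#define CHUNK_SECTORS 1024ULL   /* 512 KiB in 512-byte sectors */
#define NDISKS        2ULL

int main(void)
{
        uint64_t s = 2554601336ULL;            /* logical sector on md1 */
        uint64_t chunk  = s / CHUNK_SECTORS;   /* stripe chunk index */
        uint64_t disk   = chunk % NDISKS;      /* member that holds it */
        uint64_t dchunk = chunk / NDISKS;      /* chunk index on member */
        uint64_t dsect  = dchunk * CHUNK_SECTORS + s % CHUNK_SECTORS;

        printf("sector %llu -> disk %llu, member sector %llu\n",
               (unsigned long long)s, (unsigned long long)disk,
               (unsigned long long)dsect);
        return 0;
}

The relevant point is that raid0 can only remap a request wholesale if it
fits within one 1024-sector chunk on one member; anything that straddles
a boundary has to be split, and that seems to be what is going wrong
below.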

Now, out of the blue (or seemingly out of the blue; nothing really comes
out of the blue), I have started to get the md0 array faulting with:

[37722.335880] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 2554601336 124
[37722.335887] md/raid1:md0: md1: rescheduling sector 2554601336
[37722.376776] md/raid1:md0: redirecting sector 2554601336 to other mirror: md1
[37722.386455] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 2554601336 124
[37722.386461] md/raid1:md0: md1: rescheduling sector 2554601336
[37722.412059] md/raid1:md0: redirecting sector 2554601336 to other mirror: sdd
[37722.735049] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 2554608496 124
[37722.735056] md/raid1:md0: md1: rescheduling sector 2554608496
[37722.789926] md/raid1:md0: redirecting sector 2554608496 to other mirror: md1
[37722.798689] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 2554608496 124
[37722.798696] md/raid1:md0: md1: rescheduling sector 2554608496
[37722.859314] md/raid1:md0: redirecting sector 2554608496 to other mirror: sdd
[37724.819090] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 2554606584 12
[37724.819096] md/raid1:md0: md1: rescheduling sector 2554606584
[37727.455279] md/raid1:md0: redirecting sector 2554606584 to other mirror: sdd
[37752.406865] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 1748150224 124
[37752.521219] md/raid1:md0: Disk failure on md1, disabling device.
[37752.521222] md/raid1:md0: Operation continuing on 1 devices.
[37752.580186] RAID1 conf printout:
[37752.580190]  --- wd:1 rd:2
[37752.580194]  disk 0, wo:0, o:1, dev:sdd
[37752.580197]  disk 1, wo:1, o:0, dev:md1
[37752.580199] RAID1 conf printout:
[37752.580201]  --- wd:1 rd:2
[37752.580203]  disk 0, wo:0, o:1, dev:sdd
[37752.580228] BUG: unable to handle kernel paging request at 000000001dff2107
[37752.584011] IP: [<ffffffffa001909a>] close_write+0x6a/0xa0 [raid1]
[37752.584011] PGD 20f667067 PUD 20f669067 PMD 0 
[37752.584011] Oops: 0000 [#1] SMP 
[37752.584011] CPU 0 
[37752.584011] Modules linked in: iptable_mangle iptable_filter ipt_MASQUERADE xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables x_tables autofs4 nfsd nfs lockd fscache auth_rpcgss nfs_acl sunrpc snd_hda_codec_realtek snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_seq_midi ttm drm_kms_helper snd_rawmidi ftdi_sio snd_seq_midi_event snd_seq snd_timer drm ppdev snd_seq_device pl2303 usbserial serio_raw i2c_algo_bit mac_hid asus_atk0110 snd parport_pc parport soundcore snd_page_alloc edac_core edac_mce_amd i2c_piix4 k8temp shpchp 8021q garp stp pata_atiixp r8169 raid10 raid456 async_pq async_xor xor async_memcpy async_raid6_recov raid6_pq async_tx raid1 raid0 multipath linear
[37752.584011] 
[37752.584011] Pid: 267, comm: md0_raid1 Not tainted 3.2.0-101-generic #141-Ubuntu System manufacturer System Product Name/M2A-VM
[37752.584011] RIP: 0010:[<ffffffffa001909a>]  [<ffffffffa001909a>] close_write+0x6a/0xa0 [raid1]
[37752.584011] RSP: 0018:ffff88020def9dc0  EFLAGS: 00010246
[37752.584011] RAX: 000000001dff1dff RBX: ffff88020e380108 RCX: 00000000e8e08784
[37752.584011] RDX: 00000000687249a8 RSI: 0000000000000002 RDI: ffff88020e380108
[37752.584011] RBP: ffff88020def9dd0 R08: 00000000e8e08784 R09: 000000000000036b
[37752.584011] R10: dead000000100100 R11: dead000000200200 R12: ffff88020e36b800
[37752.584011] R13: ffff88020e380100 R14: ffff88020e380130 R15: ffff88020e380108
[37752.584011] FS:  00007f8e0a353740(0000) GS:ffff88021fc00000(0000) knlGS:00000000f75586c0
[37752.584011] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[37752.584011] CR2: 000000001dff2107 CR3: 000000020f61e000 CR4: 00000000000006f0
[37752.584011] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[37752.584011] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[37752.584011] Process md0_raid1 (pid: 267, threadinfo ffff88020def8000, task ffff88020e2eae00)
[37752.584011] Stack:
[37752.584011]  ffff88020def9e20 ffff88020e36b800 ffff88020def9e60 ffffffffa001cbe5
[37752.584011]  ffff88020e380140 ffff880044dbd800 0000000091827364 ffff88020def9df8
[37752.584011]  ffff88020def9df8 ffff88020def9e08 ffff88020def9e08 ffff880200000000
[37752.584011] Call Trace:
[37752.584011]  [<ffffffffa001cbe5>] raid1d+0x2a5/0x2b0 [raid1]
[37752.584011]  [<ffffffff814f0f9e>] md_thread+0x11e/0x170
[37752.584011]  [<ffffffff8108c650>] ? add_wait_queue+0x60/0x60
[37752.584011]  [<ffffffff814f0e80>] ? md_rdev_init+0x130/0x130
[37752.584011]  [<ffffffff8108bbac>] kthread+0x8c/0xa0
[37752.584011]  [<ffffffff816746b4>] kernel_thread_helper+0x4/0x10
[37752.584011]  [<ffffffff8108bb20>] ? flush_kthread_worker+0xa0/0xa0
[37752.584011]  [<ffffffff816746b0>] ? gs_change+0x13/0x13
[37752.584011] Code: e4 48 8b 53 48 75 de 48 89 d7 e8 e2 cd 14 e1 48 c7 43 48 00 00 00 00 4c 8b 43 18 48 8b 43 20 48 8b 4b 18 48 63 53 10 48 8b 73 08 <48> 8b b8 08 03 00 00 49 c1 e8 03 48 c1 e9 02 41 83 e0 01 83 e1 
[37752.584011] RIP  [<ffffffffa001909a>] close_write+0x6a/0xa0 [raid1]
[37752.584011]  RSP <ffff88020def9dc0>
[37752.584011] CR2: 000000001dff2107
[37752.889674] ---[ end trace 276f024e11aa62ac ]---
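
If I'm reading drivers/md/raid0.c for this kernel correctly (an
assumption on my part), the two numbers after "512k" are the bio's start
sector and its size in KiB, and the message fires when a request
straddles a 512 KiB chunk boundary that raid0 cannot split.  A quick
re-check of the logged requests, assuming 512-byte sectors:

/* boundary.c - re-check the failing requests against the 512 KiB chunk.
 * Assumes the second logged number is the bio size in KiB (my reading
 * of the 3.2 raid0 code; treat that as an assumption).
 */
#include <stdio.h>
#include <stdint.h>

#define CHUNK_SECTORS 1024ULL   /* 512 KiB chunk in 512-byte sectors */

int main(void)
{
        struct { uint64_t sector; unsigned kib; } req[] = {
                { 2554601336ULL, 124 },
                { 2554608496ULL, 124 },
                { 2554606584ULL,  12 },
                { 1748150224ULL, 124 },
        };
        for (unsigned i = 0; i < sizeof req / sizeof req[0]; i++) {
                uint64_t off = req[i].sector % CHUNK_SECTORS;
                uint64_t end = off + req[i].kib * 2;   /* KiB -> sectors */
                printf("sector %llu + %u KiB: chunk offset %llu, "
                       "end %llu -> %s\n",
                       (unsigned long long)req[i].sector, req[i].kib,
                       (unsigned long long)off, (unsigned long long)end,
                       end > CHUNK_SECTORS ? "crosses boundary" : "fits");
        }
        return 0;
}

On that reading, all four requests do straddle a chunk boundary, so the
messages at least look self-consistent; the question is why such bios are
suddenly being sent down to md1 at all.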

Any ideas as to what has suddenly started happening?

Cheers,
b.



Subject: Re: make_request bug: can't convert block across chunks or bigger than 512k 2554601336 124
From: Brian J. Murrell @ 2016-04-11 12:05 UTC
  To: linux-raid


On Fri, 2016-04-08 at 07:27 -0400, Brian J. Murrell wrote:
> So I've had this RAID configuration set up and running for many
> months, if not years:

My apologies if I come across as impatient, but I wonder if anyone can
shed any light on this.  My array is currently broken and I'd like to
get it back in order before I lose it completely.  Murphy's law and all.

> [ ...rest of the original report and oops trace trimmed; quoted in
> full above... ]


