[PATCH v3] block: Fix a WRITE SAME BUG_ON
From: ZhangXiaoxu @ 2019-02-20 2:09 UTC
To: martin.petersen, hch, heinzm, jdorminy, axboe, snitzer,
linux-block, dm-devel, agk, zhangxiaoxu5
If an LVM device is stacked on top of disks with different logical
block sizes, issuing a WRITE SAME to it triggers a BUG_ON:
kernel BUG at drivers/scsi/sd.c:968!
invalid opcode: 0000 [#1] SMP PTI
CPU: 11 PID: 525 Comm: kworker/11:1H Tainted: G O 5.0.0-rc3+ #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-2.fc27 04/01/2014
Workqueue: kblockd blk_mq_run_work_fn
RIP: 0010:sd_init_command+0x7aa/0xdb0
Code: 30 75 00 00 c7 85 44 01 00 00 18 00 00 00 0f 85 fa 04 00 00 48 83 c4 40 48
89 df 5b 5d 41 5c 41 5d 41 5e 41 5f e9 b6 ca fe ff <0f> 0b 41 bc 09
RSP: 0018:ffffb55f80ddbca0 EFLAGS: 00010206
RAX: 0000000000001000 RBX: ffff9ed23fb927a0 RCX: 000000000000f000
RDX: ffff9ed23f0a8400 RSI: ffff9ed27bc79800 RDI: 0000000000000000
RBP: ffff9ed23fb92680 R08: ffff9ed27c8c0000 R09: ffff9ed23fb927d8
R10: 0000000000000000 R11: fefefefefefefeff R12: ffff9ed27bc79800
R13: ffff9ed2787a0000 R14: ffff9ed27bdf3400 R15: ffff9ed23fb927a0
FS: 0000000000000000(0000) GS:ffff9ed27c8c0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6b14cf9341 CR3: 0000000069058000 CR4: 00000000000006e0
Call Trace:
? vp_notify+0x12/0x20
scsi_queue_rq+0x525/0xa30
blk_mq_dispatch_rq_list+0x8d/0x580
? syscall_return_via_sysret+0x10/0x7f
? elv_rb_del+0x1f/0x30
? deadline_remove_request+0x55/0xc0
blk_mq_do_dispatch_sched+0x76/0x110
blk_mq_sched_dispatch_requests+0xf9/0x170
__blk_mq_run_hw_queue+0x51/0xd0
process_one_work+0x195/0x380
worker_thread+0x30/0x390
? process_one_work+0x380/0x380
kthread+0x113/0x130
? kthread_park+0x90/0x90
ret_from_fork+0x35/0x40
Modules linked in: alloc(O+)
---[ end trace dc92ddeb2e6d1fe5 ]---
The logical_block_size of the stacked LVM device is the maximum of the
logical block sizes of its component disks, so it can differ from that
of an individual component. When a WRITE SAME is issued to such a
device, sd.c hits the BUG_ON while setting up the WRITE SAME command
for the mismatched component disk.
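
For context, the check that fires is the payload-size test in
sd_setup_write_same_cmnd() (paraphrased here from the v5.0-rc sources
for illustration; it is not part of this patch): the single-block
WRITE SAME payload is sized with the stacked queue's
logical_block_size, while sdp->sector_size is the component disk's own
logical block size.

	BUG_ON(bio_offset(bio) || bio_iovec(bio).bv_len != sdp->sector_size);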
Fix this by disabling WRITE SAME on the stacked device whenever its
component disks have different logical_block_size values.
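
To see why the first stacking iteration must be exempted, consider a
minimal stand-alone model of the new rule (a userspace sketch with
made-up values, not kernel code): a freshly initialized queue_limits
has logical_block_size 512 and max_write_same_sectors UINT_MAX, so
without the UINT_MAX check the very first 4096-byte component stacked
would be treated as a mismatch.

	#include <limits.h>
	#include <stdio.h>

	/* Stand-alone model of the patched stacking rule; field names
	 * mirror struct queue_limits, but the values are hypothetical. */
	struct limits {
		unsigned int logical_block_size;     /* default: 512 */
		unsigned int max_write_same_sectors; /* default: UINT_MAX */
	};

	static void stack_one(struct limits *t, const struct limits *b)
	{
		/* UINT_MAX means nothing has been stacked yet, so a
		 * mismatch against the 512-byte default is ignored. */
		if (t->logical_block_size != b->logical_block_size &&
		    t->max_write_same_sectors != UINT_MAX)
			t->max_write_same_sectors = 0;
		else if (b->max_write_same_sectors < t->max_write_same_sectors)
			t->max_write_same_sectors = b->max_write_same_sectors;

		if (b->logical_block_size > t->logical_block_size)
			t->logical_block_size = b->logical_block_size;
	}

	int main(void)
	{
		struct limits t = { 512, UINT_MAX };  /* fresh top-level queue */
		struct limits d512 = { 512, 0xffff }; /* 512-byte LBS disk */
		struct limits d4k = { 4096, 0xffff }; /* 4096-byte LBS disk */

		stack_one(&t, &d512); /* first iteration: limit kept */
		stack_one(&t, &d4k);  /* mismatch: WRITE SAME disabled */

		printf("max_write_same_sectors = %u\n",
		       t.max_write_same_sectors);
		return 0;
	}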
Signed-off-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
---
block/blk-settings.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 3e7038e..672cd27 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -497,8 +497,6 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors);
 	t->max_dev_sectors = min_not_zero(t->max_dev_sectors, b->max_dev_sectors);
-	t->max_write_same_sectors = min(t->max_write_same_sectors,
-					b->max_write_same_sectors);
 	t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors,
 					  b->max_write_zeroes_sectors);
 	t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn);
@@ -537,6 +535,18 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 		}
 	}
 
+	/* If the logical block sizes differ, disable WRITE SAME. The
+	 * default max_write_same_sectors is UINT_MAX and the default
+	 * logical_block_size is 512, so do not apply this rule on the
+	 * first stacking iteration.
+	 */
+	if (t->logical_block_size != b->logical_block_size &&
+	    t->max_write_same_sectors != UINT_MAX)
+		t->max_write_same_sectors = 0;
+	else
+		t->max_write_same_sectors = min(t->max_write_same_sectors,
+						b->max_write_same_sectors);
+
 	t->logical_block_size = max(t->logical_block_size,
 				    b->logical_block_size);
 
--
2.7.4