From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S941486AbcKNDu7 (ORCPT );
	Sun, 13 Nov 2016 22:50:59 -0500
Received: from shadbolt.e.decadent.org.uk ([88.96.1.126]:45165 "EHLO
	shadbolt.e.decadent.org.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752648AbcKNCEP (ORCPT );
	Sun, 13 Nov 2016 21:04:15 -0500
Content-Type: text/plain; charset="UTF-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
From: Ben Hutchings
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
CC: akpm@linux-foundation.org, "Bart Van Assche", "Mike Christie",
	"Nicholas Bellinger"
Date: Mon, 14 Nov 2016 00:14:20 +0000
Message-ID:
X-Mailer: LinuxStableQueue (scripts by bwh)
Subject: [PATCH 3.16 095/346] target: Fix max_unmap_lba_count calc overflow
In-Reply-To:
X-SA-Exim-Connect-IP: 2a02:8011:400e:2:6f00:88c8:c921:d332
X-SA-Exim-Mail-From: ben@decadent.org.uk
X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk);
	SAEximRunCond expanded to false
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

3.16.39-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Mike Christie

commit ea263c7fada4af8ec7fe5fcfd6e7d7705a89351b upstream.

max_discard_sectors is only 32 bits, and some non-SCSI backend
devices will set it to the maximum 0xffffffff, so we can end up
overflowing during the max_unmap_lba_count calculation.

This fixes a regression caused by my patch:

commit 8a9ebe717a133ba7bc90b06047f43cc6b8bcb8b3
Author: Mike Christie
Date:   Mon Jan 18 14:09:27 2016 -0600

    target: Fix WRITE_SAME/DISCARD conversion to linux 512b sectors

which can result in extra discards being sent due to the overflow
causing max_unmap_lba_count to be smaller than what the backing
device can actually support.

Signed-off-by: Mike Christie
Reviewed-by: Bart Van Assche
Signed-off-by: Nicholas Bellinger
Signed-off-by: Ben Hutchings
---
 drivers/target/target_core_device.c  | 8 +++++---
 drivers/target/target_core_file.c    | 3 +--
 drivers/target/target_core_iblock.c  | 3 +--
 include/target/target_core_backend.h | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -1583,13 +1583,15 @@ struct se_device *target_alloc_device(st
  * in ATA and we need to set TPE=1
  */
 bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
-				       struct request_queue *q, int block_size)
+				       struct request_queue *q)
 {
+	int block_size = queue_logical_block_size(q);
+
 	if (!blk_queue_discard(q))
 		return false;
 
-	attrib->max_unmap_lba_count = (q->limits.max_discard_sectors << 9) /
-								block_size;
+	attrib->max_unmap_lba_count =
+		q->limits.max_discard_sectors >> (ilog2(block_size) - 9);
 	/*
 	 * Currently hardcoded to 1 in Linux/SCSI code..
 	 */
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -165,8 +165,7 @@ static int fd_configure_device(struct se
 			dev_size, div_u64(dev_size, fd_dev->fd_block_size),
 			fd_dev->fd_block_size);
 
-		if (target_configure_unmap_from_queue(&dev->dev_attrib, q,
-						      fd_dev->fd_block_size))
+		if (target_configure_unmap_from_queue(&dev->dev_attrib, q))
 			pr_debug("IFILE: BLOCK Discard support available,"
 				 " disabled by default\n");
 		/*
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -126,8 +126,7 @@ static int iblock_configure_device(struc
 	dev->dev_attrib.hw_max_sectors = queue_max_hw_sectors(q);
 	dev->dev_attrib.hw_queue_depth = q->nr_requests;
 
-	if (target_configure_unmap_from_queue(&dev->dev_attrib, q,
-					      dev->dev_attrib.hw_block_size))
+	if (target_configure_unmap_from_queue(&dev->dev_attrib, q))
 		pr_debug("IBLOCK: BLOCK Discard support available,"
 			 " disabled by default\n");
 
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -97,6 +97,6 @@ sense_reason_t transport_generic_map_mem
 void array_free(void *array, int n);
 sector_t target_to_linux_sector(struct se_device *dev, sector_t lb);
 bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
-				       struct request_queue *q, int block_size);
+				       struct request_queue *q);
 
 #endif /* TARGET_CORE_BACKEND_H */
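
For anyone who wants to sanity-check the arithmetic, here is a minimal
userspace sketch (not part of the patch).  It mimics the old and new
max_unmap_lba_count formulas with an assumed 4096-byte logical block size
and the worst-case max_discard_sectors value of 0xffffffff from the commit
message; ilog2_u32() is a local stand-in for the kernel's ilog2().

/*
 * Illustrative userspace sketch only -- not kernel code.  It reproduces
 * the arithmetic in target_configure_unmap_from_queue() with assumed
 * inputs: block_size = 4096 and max_discard_sectors = 0xffffffff.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's ilog2() for a power-of-two block size. */
static unsigned int ilog2_u32(uint32_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

int main(void)
{
	uint32_t max_discard_sectors = 0xffffffffu;	/* in 512-byte sectors */
	uint32_t block_size = 4096;			/* logical block size */

	/* Old formula: the "<< 9" wraps in 32-bit arithmetic. */
	uint32_t old_count = (max_discard_sectors << 9) / block_size;

	/* New formula: shift right instead, so nothing can overflow. */
	uint32_t new_count = max_discard_sectors >> (ilog2_u32(block_size) - 9);

	printf("old max_unmap_lba_count: %" PRIu32 "\n", old_count);
	printf("new max_unmap_lba_count: %" PRIu32 "\n", new_count);
	return 0;
}

With these inputs the old expression wraps to 1048575 blocks while the new
one yields the expected 536870911, i.e. the overflow makes
max_unmap_lba_count much smaller than the backing device really supports,
which is the regression described in the commit message.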