From: Ming Lei <ming.lei@redhat.com>
To: Dexuan Cui <decui@microsoft.com>
Cc: kys@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org,
jejb@linux.ibm.com, martin.petersen@oracle.com,
haiyangz@microsoft.com, bvanassche@acm.org,
john.garry@huawei.com, linux-scsi@vger.kernel.org,
linux-hyperv@vger.kernel.org, longli@microsoft.com,
mikelley@microsoft.com, linux-kernel@vger.kernel.org,
stable@vger.kernel.org
Subject: Re: [PATCH v2] scsi: core: Fix shost->cmd_per_lun calculation in scsi_add_host_with_dma()
Date: Fri, 8 Oct 2021 11:19:47 +0800
Message-ID: <YV+40/pHlLwseFw/@T590>
In-Reply-To: <20211007174957.2080-1-decui@microsoft.com>
On Thu, Oct 07, 2021 at 10:49:57AM -0700, Dexuan Cui wrote:
> After commit ea2f0f77538c, a 416-CPU VM running on Hyper-V hangs during
> boot because scsi_add_host_with_dma() sets shost->cmd_per_lun to a
> negative number (the below numbers may differ in different kernel versions):
> in drivers/scsi/storvsc_drv.c, storvsc_drv_init() sets
> 'max_outstanding_req_per_channel' to 352, and storvsc_probe() sets
> 'max_sub_channels' to (416 - 1) / 4 = 103 and sets scsi_driver.can_queue to
> 352 * (103 + 1) * (100 - 10) / 100 = 32947, which exceeds SHRT_MAX (32767),
> so the min_t(short, ...) cast wraps the value to a negative short.
>
> Use min_t(int, ...) to fix the issue.
>
> Fixes: ea2f0f77538c ("scsi: core: Cap scsi_host cmd_per_lun at can_queue")
> Cc: stable@vger.kernel.org
> Signed-off-by: Dexuan Cui <decui@microsoft.com>
> ---
>
> v1 tried to fix the issue by changing the storvsc driver:
> https://lwn.net/ml/linux-kernel/BYAPR21MB1270BBC14D5F1AE69FC31A16BFB09@BYAPR21MB1270.namprd21.prod.outlook.com/
>
> v2 directly fixes the scsi core change instead as Michael Kelley and
> John Garry suggested (refer to the above link).
>
> drivers/scsi/hosts.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
> index 3f6f14f0cafb..24b72ee4246f 100644
> --- a/drivers/scsi/hosts.c
> +++ b/drivers/scsi/hosts.c
> @@ -220,7 +220,8 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
>  		goto fail;
>  	}
>  
> -	shost->cmd_per_lun = min_t(short, shost->cmd_per_lun,
> +	/* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
> +	shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
>  				   shost->can_queue);
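
To see the truncation concretely, here is a minimal userspace sketch. The
min_t() macro below is a simplified stand-in for the kernel's, and
cmd_per_lun = 2048 is only an illustrative storvsc-like value:

#include <stdio.h>

/*
 * Simplified stand-in for the kernel's min_t(): cast both arguments
 * to 'type', then take the smaller one.
 */
#define min_t(type, x, y) ((type)(x) < (type)(y) ? (type)(x) : (type)(y))

int main(void)
{
	int can_queue = 352 * (103 + 1) * (100 - 10) / 100; /* 32947 */
	short cmd_per_lun = 2048;	/* illustrative driver default */

	/* Old code: (short)32947 wraps to -32589, which then "wins". */
	short old = min_t(short, cmd_per_lun, can_queue);

	/* Fixed code: compare at int width, so nothing is truncated. */
	short new = min_t(int, cmd_per_lun, can_queue);

	printf("min_t(short, ...) = %d\n", old);	/* -32589 */
	printf("min_t(int, ...)   = %d\n", new);	/* 2048 */
	return 0;
}

With min_t(int, ...) the comparison happens at int width, and the result
(2048 here) always fits in the short cmd_per_lun field, so the assignment
back is safe.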
Looks fine:
Reviewed-by: Ming Lei <ming.lei@redhat.com>
--
Ming