From: Brian King <brking@linux.vnet.ibm.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-scsi@vger.kernel.org, brking@pobox.com, jejb@linux.ibm.com,
	martin.petersen@oracle.com, michael.christie@oracle.com
Subject: Re: [PATCH] scsi: Update max_hw_sectors on rescan
Date: Wed, 24 Jan 2024 16:46:25 -0600	[thread overview]
Message-ID: <c665ba57-92a2-4d6a-adb1-4c3222de5f38@linux.vnet.ibm.com> (raw)
In-Reply-To: <ZbDXV4u17fcQHwjN@infradead.org>

On 1/24/24 3:24 AM, Christoph Hellwig wrote:
> We can't change the host-wide limit here (it wouldn't apply to all
> LUs anyway).  If your limit is per-LU, you can call
> blk_queue_max_hw_sectors from ->slave_configure.

Unfortunately, it doesn't look like slave_configure gets called in the
scenario in question. In this case we already have a scsi_device created, but
it's in the devloss state and the FC transport layer is bringing it back
online.
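
For reference, here is roughly what that per-LU cap would look like if
slave_configure did get invoked in this path. The struct, function name, and
max_xfer_sectors field below are placeholders for illustration, not actual
ibmvfc code:

#include <linux/blkdev.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

struct my_host_data {
	unsigned int max_xfer_sectors;	/* hypothetical limit reported by firmware */
};

/* Hypothetical per-LU cap applied when the LU is configured */
static int my_slave_configure(struct scsi_device *sdev)
{
	struct my_host_data *hostdata = shost_priv(sdev->host);

	/* Cap transfers on this LU to what the firmware currently reports */
	blk_queue_max_hw_sectors(sdev->request_queue, hostdata->max_xfer_sectors);
	return 0;
}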

There is also the point Mike brought up: if the fast fail tmo has not yet
fired, there could be I/O still in the queue that is now too large.

To answer your earlier question, Mike: if the VIOS receives a request that
is too large, it closes the CRQ, forcing an entire reinit / discovery, so it's
definitely not something we want to encounter. I'm trying to get this behavior
improved so that only the one command fails, but that's not what happens today.

I suppose I could iterate through all the LUNs and call blk_queue_max_hw_sectors
on each of them, but I'm not sure that solves the problem. It would close the
window that Mike highlighted, but if there are commands outstanding when this
occurs that are larger than the new max_hw_sectors and they get requeued, will
they get split in the block layer when they are resent to the LLD, or will they
just be resent as-is? If it's the latter, I'd get a request larger than what I
can support.
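
If I went that route, I'm picturing something along these lines; the
new_max_sectors value would come from the rediscovered capabilities, and the
function and parameter names are just placeholders, not actual ibmvfc code:

#include <linux/blkdev.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Hypothetical: re-cap every LU on the host after the rport comes back */
static void clamp_all_luns(struct Scsi_Host *shost, unsigned int new_max_sectors)
{
	struct scsi_device *sdev;

	/* shost_for_each_device() takes and drops a reference on each sdev */
	shost_for_each_device(sdev, shost)
		blk_queue_max_hw_sectors(sdev->request_queue, new_max_sectors);
}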

-Brian


-- 
Brian King
Power Linux I/O
IBM Linux Technology Center



Thread overview: 8+ messages
2024-01-17 21:36 [PATCH] scsi: Update max_hw_sectors on rescan Brian King
2024-01-18 15:44 ` John Garry
2024-01-18 17:22   ` Brian King
2024-01-19  9:02     ` John Garry
2024-01-23 13:59       ` Brian King
2024-01-23 22:40 ` Mike Christie
2024-01-24  9:24 ` Christoph Hellwig
2024-01-24 22:46   ` Brian King [this message]
