* Re: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Nicholas A. Bellinger @ 2010-08-31 23:56 UTC
  To: Vasu Dev
  Cc: michaelc, devel, linux-scsi, FUJITA Tomonori, James Bottomley,
	Christoph Hellwig

On Tue, 2010-08-31 at 15:53 -0700, Vasu Dev wrote:
> Currently fc_queuecommand drops this lock very early on and then
> re-acquires it just before returning; the re-acquired lock is dropped
> immediately by its fast-path caller scsi_dispatch_cmd. This re-acquire
> and immediate drop on each IO hurts performance, especially with small
> IOs on multi-core systems; the hit is significant, about 25% with
> 512-byte IOs.
> 
> This lock is not needed in fc_queuecommand, and calling fc_queuecommand
> without it held removes the performance hit from the re-acquire and
> immediate drop described above.

Hi Vasu,

Very interesting numbers with small blocksizes and your patch.  I recall
trying to make a similar optimization patch myself in early 2.6.0 for
Linux/iSCSI initiators, which still obtained struct Scsi_Host->host_lock
before calling struct scsi_host_template->queuecommand() in
drivers/scsi/scsi.c:scsi_dispatch_cmd(), but simply did not attempt to
disable hard interrupts with spin_lock_irqsave().  I actually ran into
some deadlocks back then, ended up giving up on the patch, and have not
pursued the idea further since..
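
For reference, that variation looked roughly like this (a sketch from
memory, not the actual patch):

	/* keep host_lock serialization, but skip the hard-irq disable */
	spin_lock(host->host_lock);	/* instead of spin_lock_irqsave() */
	rtn = host->hostt->queuecommand(cmd, scsi_done);
	spin_unlock(host->host_lock);

Since host_lock is also taken from interrupt context on the completion
side, taking it with plain spin_lock() can deadlock as soon as the IRQ
fires on the same CPU while the lock is held, which would explain the
deadlocks I was seeing.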

Anyway, the idea is an interesting one for discussion, and it's good to
finally see numbers on this.

I have a comment on your patch below..

> 
> So this patch adds an unlocked_qcmds flag to drop host_lock before
> calling fc_queuecommand only, with no need to re-acquire and then drop
> this lock on each IO. The added flag drops this lock only if the LLD
> asks for it, as fcoe.ko does here, so this change won't affect existing
> LLDs: their unlocked_qcmds flag will be zero and their host lock use
> remains the same after this patch.
> 
> I considered having this lock dropped inside fc_queuecommand using the
> irq flags saved by its caller and then returning without re-acquiring
> it, but that would make the lock usage asymmetric, and passing the
> saved irq flags around was another issue with this approach. So this is
> an RFC to seek any better possible fix, or any issue with the added
> unlocked_qcmds flag now that fc_queuecommand no longer runs under
> host_lock. I'd appreciate any comments from Mike, Joe and folks on
> open-fcoe first, before taking this patch to linux-scsi directly as
> suggested by Rob and Chris here.
> 
> I also did a cable-pull test in the middle of IOs with this patch and
> don't see any issue with it; some more testing has already been done
> successfully with this patch.
> 
> Signed-off-by: Vasu Dev <vasu.dev@intel.com>
> ---
>  drivers/scsi/fcoe/fcoe.c    |    1 +
>  drivers/scsi/libfc/fc_fcp.c |   14 +++++---------
>  drivers/scsi/scsi.c         |    8 +++++++-
>  include/scsi/scsi_host.h    |    3 +++
>  4 files changed, 16 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
> index 844d618..280a4df 100644
> --- a/drivers/scsi/fcoe/fcoe.c
> +++ b/drivers/scsi/fcoe/fcoe.c
> @@ -706,6 +706,7 @@ static int fcoe_shost_config(struct fc_lport *lport, struct device *dev)
>  	lport->host->max_id = FCOE_MAX_FCP_TARGET;
>  	lport->host->max_channel = 0;
>  	lport->host->max_cmd_len = FCOE_MAX_CMD_LEN;
> +	lport->host->unlocked_qcmds = 1;
>  
>  	if (lport->vport)
>  		lport->host->transportt = fcoe_vport_transport_template;
> diff --git a/drivers/scsi/libfc/fc_fcp.c b/drivers/scsi/libfc/fc_fcp.c
> index 43866e6..39b6bfa 100644
> --- a/drivers/scsi/libfc/fc_fcp.c
> +++ b/drivers/scsi/libfc/fc_fcp.c
> @@ -1751,8 +1751,7 @@ static inline int fc_fcp_lport_queue_ready(struct fc_lport *lport)
>   * @cmd:   The scsi_cmnd to be executed
>   * @done:  The callback function to be called when the scsi_cmnd is complete
>   *
> - * This is the i/o strategy routine, called by the SCSI layer. This routine
> - * is called with the host_lock held.
> + * This is the i/o strategy routine, called by the SCSI layer.
>   */
>  int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
>  {
> @@ -1770,9 +1769,8 @@ int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
>  	if (rval) {
>  		sc_cmd->result = rval;
>  		done(sc_cmd);
> -		return 0;
> +		return rc;
>  	}
> -	spin_unlock_irq(lport->host->host_lock);
>  
>  	if (!*(struct fc_remote_port **)rport->dd_data) {
>  		/*
> @@ -1781,7 +1779,7 @@ int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
>  		 */
>  		sc_cmd->result = DID_IMM_RETRY << 16;
>  		done(sc_cmd);
> -		goto out;
> +		return rc;
>  	}
>  
>  	rpriv = rport->dd_data;
> @@ -1790,13 +1788,13 @@ int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
>  		if (lport->qfull)
>  			fc_fcp_can_queue_ramp_down(lport);
>  		rc = SCSI_MLQUEUE_HOST_BUSY;
> -		goto out;
> +		return rc;
>  	}
>  
>  	fsp = fc_fcp_pkt_alloc(lport, GFP_ATOMIC);
>  	if (fsp == NULL) {
>  		rc = SCSI_MLQUEUE_HOST_BUSY;
> -		goto out;
> +		return rc;
>  	}
>  
>  	/*
> @@ -1848,8 +1846,6 @@ int fc_queuecommand(struct scsi_cmnd *sc_cmd, void (*done)(struct scsi_cmnd *))
>  		fc_fcp_pkt_release(fsp);
>  		rc = SCSI_MLQUEUE_HOST_BUSY;
>  	}
> -out:
> -	spin_lock_irq(lport->host->host_lock);
>  	return rc;
>  }
>  EXPORT_SYMBOL(fc_queuecommand);
> diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
> index ad0ed21..e7b9f3c 100644
> --- a/drivers/scsi/scsi.c
> +++ b/drivers/scsi/scsi.c
> @@ -746,6 +746,9 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
>  	 */
>  	scsi_cmd_get_serial(host, cmd); 
>  
> +	if (host->unlocked_qcmds)
> +		spin_unlock_irqrestore(host->host_lock, flags);
> +
>  	if (unlikely(host->shost_state == SHOST_DEL)) {
>  		cmd->result = (DID_NO_CONNECT << 16);
>  		scsi_done(cmd);

I don't think it's safe to call scsi_done() for the SHOST_DEL case here
with host->unlocked_qcmds=1 w/o holding host_lock, nor would it be safe
for the SCSI LLD itself using host->unlocked_qcmds=1 to call the
(*scsi_done)() being passed into sht->queuecommand() without
host->host_lock being held by either the SCSI ML or the SCSI LLD.

There may also be some other issues with doing this that are more
subtle..  Any comments on the possibility of doing this properly for
software initiators from the Linux/SCSI folks..?  Considering that
host_lock is always held for scsi_dispatch_cmd() -> scsi_done()
completion..?

Thanks!

--nab




* Re: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Nicholas A. Bellinger @ 2010-09-01  0:16 UTC
  To: Vasu Dev
  Cc: michaelc, devel, linux-scsi, FUJITA Tomonori, James Bottomley,
	Christoph Hellwig

On Tue, 2010-08-31 at 16:56 -0700, Nicholas A. Bellinger wrote:
> On Tue, 2010-08-31 at 15:53 -0700, Vasu Dev wrote:
> > diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
> > index ad0ed21..e7b9f3c 100644
> > --- a/drivers/scsi/scsi.c
> > +++ b/drivers/scsi/scsi.c
> > @@ -746,6 +746,9 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
> >  	 */
> >  	scsi_cmd_get_serial(host, cmd); 
> >  
> > +	if (host->unlocked_qcmds)
> > +		spin_unlock_irqrestore(host->host_lock, flags);
> > +
> >  	if (unlikely(host->shost_state == SHOST_DEL)) {
> >  		cmd->result = (DID_NO_CONNECT << 16);
> >  		scsi_done(cmd);
> 
> I don't think it's safe to call scsi_done() for the SHOST_DEL case here
> with host->unlocked_qcmds=1 w/o holding host_lock, nor would it be safe
> for the SCSI LLD itself using host->unlocked_qcmds=1 to call the
> (*scsi_done)() being passed into sht->queuecommand() without
> host->host_lock being held by either the SCSI ML or the SCSI LLD.
> 

Actually, also: accessing host->shost_state needs to be protected by
host->host_lock at all times in order for the HBA shutdown logic to
function as expected in scsi_dispatch_cmd()..

Best,

--nab




* Re: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Mike Christie @ 2010-09-01  4:17 UTC
  To: Nicholas A. Bellinger
  Cc: Vasu Dev, devel, linux-scsi, FUJITA Tomonori, James Bottomley,
	Christoph Hellwig

On 08/31/2010 06:56 PM, Nicholas A. Bellinger wrote:
>> +	if (host->unlocked_qcmds)
>> +		spin_unlock_irqrestore(host->host_lock, flags);
>> +
>>   	if (unlikely(host->shost_state == SHOST_DEL)) {
>>   		cmd->result = (DID_NO_CONNECT<<  16);
>>   		scsi_done(cmd);
>
> I don't think it's safe to call scsi_done() for the SHOST_DEL case here
> with host->unlocked_qcmds=1 w/o holding host_lock, nor would it be safe
> for the SCSI LLD itself using host->unlocked_qcmds=1 to call the
> (*scsi_done)() being passed into sht->queuecommand() without
> host->host_lock being held by either the SCSI ML or the SCSI LLD.

The host state should be checked under the host lock, but I do not think
it needs to be held when calling scsi_done. scsi_done just queues up the
request to be completed in the block softirq, and the block layer
protects against something like the command getting completed by
multiple code paths/threads.
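
For reference, the completion hand-off itself is tiny; it is roughly
this shape in the current code (sketch from memory, so check the
source):

	static void scsi_done(struct scsi_cmnd *cmd)
	{
		/*
		 * Hand the command to the block layer.
		 * blk_complete_request() atomically marks the request
		 * complete, so only one completer can win, and then
		 * raises BLOCK_SOFTIRQ to finish processing off this
		 * context.
		 */
		blk_complete_request(cmd->request);
	}

Nothing in there depends on host_lock being held by the caller.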


* Re: [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Zou, Yi @ 2010-09-01  7:57 UTC
  To: Mike Christie, Nicholas A. Bellinger
  Cc: FUJITA Tomonori, James Bottomley, devel,
	linux-scsi, Christoph Hellwig

> 
> On 08/31/2010 06:56 PM, Nicholas A. Bellinger wrote:
> >> +	if (host->unlocked_qcmds)
> >> +		spin_unlock_irqrestore(host->host_lock, flags);
> >> +
> >>   	if (unlikely(host->shost_state == SHOST_DEL)) {
> >>   		cmd->result = (DID_NO_CONNECT<<  16);
> >>   		scsi_done(cmd);
> >
> > I don't think it's safe to call scsi_done() for the SHOST_DEL case here
> > with host->unlocked_qcmds=1 w/o holding host_lock, nor would it be safe
> > for the SCSI LLD itself using host->unlocked_qcmds=1 to call the
> > (*scsi_done)() being passed into sht->queuecommand() without
> > host->host_lock being held by either the SCSI ML or the SCSI LLD.
> 
> The host state should be checked under the host lock, but I do not think
> it needs to be held when calling scsi_done. scsi_done just queues up the
> request to be completed in the block softirq, and the block layer
> protects against something like the command getting completed by
> multiple code paths/threads.

It looks safe to me to call scsi_done() w/o host_lock held, in which case
there is probably no need for the unlocked_qcmds flag: just move the
spin_unlock_irqrestore() up to just after scsi_cmd_get_serial(), and let
queuecommand() decide when/where it wants to grab & drop the host lock;
in the case of fc_queuecommand(), we won't grab it at all. Just a thought...
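
Something like this, maybe (untested, just to illustrate the idea):

	scsi_cmd_get_serial(host, cmd);
	/* from here on, ->queuecommand() owns any host_lock usage */
	spin_unlock_irqrestore(host->host_lock, flags);

	if (unlikely(host->shost_state == SHOST_DEL)) {
		cmd->result = (DID_NO_CONNECT << 16);
		scsi_done(cmd);
	} else {
		trace_scsi_dispatch_cmd_start(cmd);
		rtn = host->hostt->queuecommand(cmd, scsi_done);
	}

and then an LLD that really does need host_lock across its
queuecommand() would take it itself at the top of its handler.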

Thanks,
yi


* RE: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Nicholas A. Bellinger @ 2010-09-01 20:10 UTC
  To: Zou, Yi
  Cc: Mike Christie, linux-scsi, FUJITA Tomonori, James Bottomley,
	devel, Christoph Hellwig, Joe Eykholt, Hannes Reinecke,
	Matthew Wilcox

On Wed, 2010-09-01 at 00:57 -0700, Zou, Yi wrote:
> > 
> > On 08/31/2010 06:56 PM, Nicholas A. Bellinger wrote:
> > >> +	if (host->unlocked_qcmds)
> > >> +		spin_unlock_irqrestore(host->host_lock, flags);
> > >> +
> > >>   	if (unlikely(host->shost_state == SHOST_DEL)) {
> > >>   		cmd->result = (DID_NO_CONNECT<<  16);
> > >>   		scsi_done(cmd);
> > >
> > > I don't think it's safe to call scsi_done() for the SHOST_DEL case here
> > > with host->unlocked_qcmds=1 w/o holding host_lock, nor would it be safe
> > > for the SCSI LLD itself using host->unlocked_qcmds=1 to call the
> > > (*scsi_done)() being passed into sht->queuecommand() without
> > > host->host_lock being held by either the SCSI ML or the SCSI LLD.
> > 
> > The host state should be checked under the host lock, but I do not think
> > it needs to be held when calling scsi_done. scsi_done just queues up the
> > request to be completed in the block softirq, and the block layer
> > protects against something like the command getting completed by
> > multiple code paths/threads.
> 
> It looks safe to me to call scsi_done() w/o host_lock held,

Hmmmm, this indeed appears to be safe now..  For some reason I had it in
my head (and in the TCM_Loop virtual SCSI LLD code as well) that
host_lock needed to be held while calling struct scsi_cmnd->scsi_done().

I assume this is some old-age relic from the BLK days in the SCSI
completion path and the subsequent conversion.  I still see a couple of
ancient drivers in drivers/scsi/ that are still doing this, but I
believe I stand corrected in that (all..?) of the modern in-use
drivers/scsi code is indeed *not* holding host_lock while calling struct
scsi_cmnd->scsi_done()..

In that case, I will prepare a patch for TCM_Loop v4 and post it to
linux-scsi.  Thanks for the info..!

> >   in which case,
> > there is probably no need for the unlocked_qcmds flag: just move the
> > spin_unlock_irqrestore() up to just after scsi_cmd_get_serial(), and let
> > queuecommand() decide when/where it wants to grab & drop the host lock;
> > in the case of fc_queuecommand(), we won't grab it at all. Just a thought...
> 

Yes, but many existing SCSI LLDs' SHT->queuecommand() implementations
depend upon the host_lock being held (and unlock it internally), but I
don't know how many actually need to do this to begin with...?

I think initially this patch would need to run the 'optimized' path
first with a SCSI LLD like an FCoE or iSCSI software initiator that
knows host_lock is not held across SHT->queuecommand(), while still
allowing existing LLDs that expect to unlock and re-lock struct
Scsi_Host->host_lock themselves internally to keep working, so they do
not all immediately break and deadlock terribly.

From that point we could discuss, for a v2 patch, converting every
single LLD queuecommand() implementation to not directly touch
host_lock, unless they have some bizarre reason for doing so.  Again,
this assumes that calling SHT->queuecommand() unlocked is safe to begin
with, and that there are no cases where LLDs in SHT->queuecommand()
access struct Scsi_Host in ways that require the host_lock to be held.

James and Co, any comments here..?

Best,

--nab




* Re: [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Vasu Dev @ 2010-09-01 21:06 UTC
  To: Nicholas A. Bellinger
  Cc: linux-scsi, Matthew Wilcox, FUJITA Tomonori, James Bottomley,
	devel, Christoph Hellwig

On Wed, 2010-09-01 at 13:10 -0700, Nicholas A. Bellinger wrote:
> On Wed, 2010-09-01 at 00:57 -0700, Zou, Yi wrote:
> > > 
> > > On 08/31/2010 06:56 PM, Nicholas A. Bellinger wrote:
> > > >> +	if (host->unlocked_qcmds)
> > > >> +		spin_unlock_irqrestore(host->host_lock, flags);
> > > >> +
> > > >>   	if (unlikely(host->shost_state == SHOST_DEL)) {
> > > >>   		cmd->result = (DID_NO_CONNECT<<  16);
> > > >>   		scsi_done(cmd);
> > > >
> > > > I don't think it's safe to call scsi_done() for the SHOST_DEL case here
> > > > with host->unlocked_qcmds=1 w/o holding host_lock, nor would it be safe
> > > > for the SCSI LLD itself using host->unlocked_qcmds=1 to call the
> > > > (*scsi_done)() being passed into sht->queuecommand() without
> > > > host->host_lock being held by either the SCSI ML or the SCSI LLD.
> > > 
> > > The host state should be checked under the host lock, but I do not think
> > > it needs to be held when calling scsi_done. scsi_done just queues up the
> > > request to be completed in the block softirq, and the block layer
> > > protects against something like the command getting completed by
> > > multiple code paths/threads.
> > 
> > It looks safe to me to call scsi_done() w/o host_lock held,
> 
> Hmmmm, this indeed appears to be safe now..  For some reason I had
> it in my head (and in TCM_Loop virtual SCSI LLD code as well) that
> host_lock needed to be held while calling struct scsi_cmnd->scsi_done().
> 
> I assume this is some old age relic from the BLK days in the SCSI
> completion path, and the subsequent conversion.  I still see a couple of
> ancient drivers in drivers/scsi/ that are still doing this, but I
> believe I stand corrected in that (all..?) of the modern in-use
> drivers/scsi code is indeed *not* holding host_lock while calling struct
> scsi_cmnd->scsi_done()..
> 

fcoe/libfc moved to calling scsi_done w/o holding the scsi host_lock a
while ago, around Dec 2009. It was done after discussion with Matthew
and Chris Leech on the fcoe side at the time; they may have more to
comment on this.

 
> In that case, I will prepare a patch for TCM_Loop v4 and post it to
> linux-scsi.  Thanks for the info..!

> >   in which case,
> > there is probably no need for the unlocked_qcmds flag: just move the
> > spin_unlock_irqrestore() up to just after scsi_cmd_get_serial(), and let
> > queuecommand() decide when/where it wants to grab & drop the host lock;
> > in the case of fc_queuecommand(), we won't grab it at all. Just a thought...
> > 
> 
> > Yes, but many existing SCSI LLDs' SHT->queuecommand() implementations
> > depend upon the host_lock being held (and unlock it internally), but I
> > don't know how many actually need to do this to begin with...?
> > 
> > I think initially this patch would need to run the 'optimized' path
> > first with a SCSI LLD like an FCoE or iSCSI software initiator that
> > knows host_lock is not held across SHT->queuecommand(), while still
> > allowing existing LLDs that expect to unlock and re-lock struct
> > Scsi_Host->host_lock themselves internally to keep working, so they do
> > not all immediately break and deadlock terribly.
> 

I fully agree with this approach. I had the same intent with this patch:
do not impact existing LLDs doing queuecommand under the host lock.
Having such LLDs re-acquire this lock would pose the same perf issue
fcoe has now, since the lock is still needed inside scsi_dispatch_cmd
for the shost_state check (as Nab and Mike indicated above), besides
being needed for scsi_cmd_get_serial there.

I'll restore the lock for the shost_state check, and for the unlikely
SHOST_DEL case have the lock dropped afterwards, while still calling
fc_queuecommand w/o the host lock held. The change would be as follows:-


diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index ad0ed21..ce504e5 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -749,11 +749,16 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
        if (unlikely(host->shost_state == SHOST_DEL)) {
                cmd->result = (DID_NO_CONNECT << 16);
                scsi_done(cmd);
+               spin_unlock_irqrestore(host->host_lock, flags);
        } else {
                trace_scsi_dispatch_cmd_start(cmd);
+               if (host->unlocked_qcmds)
+                       spin_unlock_irqrestore(host->host_lock, flags);
                rtn = host->hostt->queuecommand(cmd, scsi_done);
+               if (!host->unlocked_qcmds)
+                       spin_unlock_irqrestore(host->host_lock, flags);
        }
-       spin_unlock_irqrestore(host->host_lock, flags);
+
        if (rtn) {
                trace_scsi_dispatch_cmd_error(cmd, rtn);
                if (rtn != SCSI_MLQUEUE_DEVICE_BUSY &&


Maybe two additional checks here is not so neat, but it's not too bad
either since it's just these two. I excluded the unlocked_qcmds checks
from the scsi_error queuecommand call to avoid cluttering that code
with additional checks w/o any good case for that code path.
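
(For context, the eh path I mean is scsi_send_eh_cmnd() in
drivers/scsi/scsi_error.c, which calls ->queuecommand() under host_lock
in roughly this shape; a sketch from memory, not the exact code:

	spin_lock_irqsave(shost->host_lock, flags);
	shost->hostt->queuecommand(scmd, scsi_eh_done);
	spin_unlock_irqrestore(shost->host_lock, flags);

An unlocked_qcmds check there would buy nothing on such a cold path.)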

> From that point we could discuss, for a v2 patch, converting every
> single LLD queuecommand() implementation to not directly touch
> host_lock, unless they have some bizarre reason for doing so. 

Good idea.

> Again, this assumes that calling SHT->queuecommand() unlocked is safe
> to begin with, and that there are no cases where LLDs in
> SHT->queuecommand() access struct Scsi_Host in ways that require the
> host_lock to be held.
> 
> James and Co, any comments here..?
> 

I'm also curious to see more comments on these good points. Thanks, Nab,
for all the comments and for opening this patch up for wider discussion;
that will help get this patch done sooner.

	Vasu



* RE: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Nicholas A. Bellinger @ 2010-09-01 21:38 UTC
  To: Vasu Dev
  Cc: Matthew Wilcox, Zou, Yi, Mike Christie, linux-scsi,
	FUJITA Tomonori, James Bottomley, devel, Christoph Hellwig,
	Joe Eykholt, Hannes Reinecke

On Wed, 2010-09-01 at 14:06 -0700, Vasu Dev wrote:
> On Wed, 2010-09-01 at 13:10 -0700, Nicholas A. Bellinger wrote:
> 
> fcoe/libfc moved to calling scsi_done w/o holding the scsi host_lock a
> while ago, around Dec 2009. It was done after discussion with Matthew
> and Chris Leech on the fcoe side at the time; they may have more to
> comment on this.
> 

Very good to know..  Thanks for this info!!

/me queues up patch  8-)

> > Yes, but many existing SCSI LLDs' SHT->queuecommand() implementations
> > depend upon the host_lock being held (and unlock it internally), but I
> > don't know how many actually need to do this to begin with...?
> > 
> > I think initially this patch would need to run the 'optimized' path
> > first with a SCSI LLD like an FCoE or iSCSI software initiator that
> > knows host_lock is not held across SHT->queuecommand(), while still
> > allowing existing LLDs that expect to unlock and re-lock struct
> > Scsi_Host->host_lock themselves internally to keep working, so they do
> > not all immediately break and deadlock terribly.
> > 
> 
> I fully agree with this approach. I had the same intent with this patch:
> do not impact existing LLDs doing queuecommand under the host lock.
> Having such LLDs re-acquire this lock would pose the same perf issue
> fcoe has now, since the lock is still needed inside scsi_dispatch_cmd
> for the shost_state check (as Nab and Mike indicated above), besides
> being needed for scsi_cmd_get_serial there.
> 
> I'll restore the lock for the shost_state check, and for the unlikely
> SHOST_DEL case have the lock dropped afterwards, while still calling
> fc_queuecommand w/o the host lock held. The change would be as follows:-
> 
> 
> diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
> index ad0ed21..ce504e5 100644
> --- a/drivers/scsi/scsi.c
> +++ b/drivers/scsi/scsi.c
> @@ -749,11 +749,16 @@ int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
>         if (unlikely(host->shost_state == SHOST_DEL)) {
>                 cmd->result = (DID_NO_CONNECT << 16);
>                 scsi_done(cmd);
> +               spin_unlock_irqrestore(host->host_lock, flags);
>         } else {
>                 trace_scsi_dispatch_cmd_start(cmd);
> +               if (host->unlocked_qcmds)
> +                       spin_unlock_irqrestore(host->host_lock, flags);
>         rtn = host->hostt->queuecommand(cmd, scsi_done);
> +               if (!host->unlocked_qcmds)
> +                       spin_unlock_irqrestore(host->host_lock, flags);
>         }
> -       spin_unlock_irqrestore(host->host_lock, flags);
> +
>         if (rtn) {
>                 trace_scsi_dispatch_cmd_error(cmd, rtn);
>                 if (rtn != SCSI_MLQUEUE_DEVICE_BUSY &&
> 
> 
> Maybe two additional checks here is not so neat, but it's not too bad
> either since it's just these two. I excluded the unlocked_qcmds checks
> from the scsi_error queuecommand call to avoid cluttering that code
> with additional checks w/o any good case for that code path.
> 

Hmmmm, handing off the locking like this between ML and LLD gets ugly
pretty quick, so I am not exactly sure this is the right way to go
about it..

Maybe in code terms we might need to consider converting some (or
all..?) struct Scsi_Host->host_lock accesses to some form of Linux/SCSI
Host specific lock and unlock wrapper that is aware of the current
->queuecommand() LLD lock status..?  Obviously this would only need to
cover the queuecommand path here first, but perhaps it would be useful
in other areas as well..?

This would at least (I think) allow ML and LLD code to be a tad bit
cleaner than something like the example above where host->unlocked_qcmds
TRUE/FALSE conditionals exist directly within ML and LLD code.
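
Something along these lines, maybe (hypothetical names, just to give
the flavor):

	static inline void scsi_host_pre_qcmd_unlock(struct Scsi_Host *shost,
						     unsigned long flags)
	{
		/* unlocked LLDs run ->queuecommand() without host_lock */
		if (shost->unlocked_qcmds)
			spin_unlock_irqrestore(shost->host_lock, flags);
	}

	static inline void scsi_host_post_qcmd_unlock(struct Scsi_Host *shost,
						      unsigned long flags)
	{
		/* legacy LLDs keep the lock held across ->queuecommand() */
		if (!shost->unlocked_qcmds)
			spin_unlock_irqrestore(shost->host_lock, flags);
	}

That way the TRUE/FALSE conditional lives in one place instead of being
open-coded at each call site.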

Any other comments here from the Linux/SCSI folks..?

Thanks!

--nab




* Re: [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Chris Leech @ 2010-09-01 22:45 UTC
  To: Vasu Dev
  Cc: linux-scsi, Matthew Wilcox, FUJITA Tomonori, James Bottomley,
	devel, Christoph Hellwig

On Wed, Sep 01, 2010 at 02:06:26PM -0700, Vasu Dev wrote:
> > > It looks safe to me to call scsi_done() w/o host_lock held,
> > 
> > Hmmmm, this indeed appears to be safe now..  For some reason I had
> > it in my head (and in TCM_Loop virtual SCSI LLD code as well) that
> > host_lock needed to be held while calling struct scsi_cmnd->scsi_done().
> > 
> > I assume this is some old age relic from the BLK days in the SCSI
> > completion path, and the subsequent conversion.  I still see a couple of
> > ancient drivers in drivers/scsi/ that are still doing this, but I
> > believe I stand corrected in that (all..?) of the modern in-use
> > drivers/scsi code is indeed *not* holding host_lock while calling struct
> > scsi_cmnd->scsi_done()..
> > 
> 
> fcoe/libfc moved to calling scsi_done w/o holding the scsi host_lock a
> while ago, around Dec 2009. It was done after discussion with Matthew
> and Chris Leech on the fcoe side at the time; they may have more to
> comment on this.

There's not a whole lot to comment on.  Matthew Wilcox was helping me
look for opportunities to reduce our host_lock use, and said he didn't
think it was needed around scsi_done anymore.  It held up under testing,
so I submitted a patch.

	Chris


* Re: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Mike Christie @ 2010-09-01 23:38 UTC
  To: Chris Leech
  Cc: Vasu Dev, Nicholas A. Bellinger, Matthew Wilcox, linux-scsi,
	FUJITA Tomonori, James Bottomley, devel, Christoph Hellwig

On 09/01/2010 05:45 PM, Chris Leech wrote:
> On Wed, Sep 01, 2010 at 02:06:26PM -0700, Vasu Dev wrote:
>>>> It looks safe to me to call scsi_done() w/o host_lock held,
>>>
>>> Hmmmm, this indeed appears to be safe now..  For some reason I had
>>> it in my head (and in TCM_Loop virtual SCSI LLD code as well) that
>>> host_lock needed to be held while calling struct scsi_cmnd->scsi_done().
>>>
>>> I assume this is some old age relic from the BLK days in the SCSI
>>> completion path, and the subsequent conversion.  I still see a couple of
>>> ancient drivers in drivers/scsi/ that are still doing this, but I
>>> believe I stand corrected in that (all..?) of the modern in-use
>>> drivers/scsi code is indeed *not* holding host_lock while calling struct
>>> scsi_cmnd->scsi_done()..
>>>
>>
>> fcoe/libfc moved to calling scsi_done w/o holding the scsi host_lock a
>> while ago, around Dec 2009. It was done after discussion with Matthew
>> and Chris Leech on the fcoe side at the time; they may have more to
>> comment on this.
>
> There's not a whole lot to comment on.  Matthew Wilcox was helping me
> look for opportunities to reduce our host_lock use, and said he didn't
> think it was needed around scsi_done anymore.  It held up under testing,
> so I submitted a patch.
>

The host_lock was not actually there for any scsi_done stuff. It was
probably lazy programming that it was held there. For that code, the
host_lock was held in fc_queuecommand for the rport check and for the
setting of SCp.ptr and fsp->cmd, and it was held in the completion
path for the SCp.ptr and fsp->cmd checks.  The rport check locking got
fixed recently, and I was looking at the SCp.ptr and fsp->cmd checks and
wondering if there could be a problem where one thread completes the IO
and sets those fields to NULL, but another thread could be completing it
too and it would see a scsi_cmnd that is not released and reallocated by
the other thread. So for example the fc_eh_abort code still grabs the
host_lock when calling CMD_SP, taking a ref, and checking that the fsp
is not null.

If it is a problem then we should add some locking or some other atomic 
magic. If it is not a problem then those checks could just be removed, 
right?
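
For reference, the pattern I mean in fc_eh_abort() is roughly this
(an abbreviated sketch from memory, not the exact code):

	spin_lock_irqsave(lport->host->host_lock, flags);
	fsp = CMD_SP(sc_cmd);
	if (!fsp) {
		/* command already completed; nothing to abort */
		spin_unlock_irqrestore(lport->host->host_lock, flags);
		return SUCCESS;
	}
	/* take a ref so the fsp cannot be released from under us */
	fc_fcp_pkt_hold(fsp);
	spin_unlock_irqrestore(lport->host->host_lock, flags);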


* Re: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Mike Christie @ 2010-09-02  1:37 UTC
  To: Chris Leech
  Cc: linux-scsi, Matthew Wilcox, FUJITA Tomonori, James Bottomley,
	devel, Christoph Hellwig

On 09/01/2010 06:38 PM, Mike Christie wrote:
> wondering if there could be a problem where one thread completes the IO
> and sets those fields to NULL, but another thread could be completing it
> too and it would see a scsi_cmnd that is not released and reallocated by

There should not be a "not" in there. It should read: and it would see a
scsi_cmnd that is released and reallocated by the other thread.

> the other thread. So for example the fc_eh_abort code still grabs the
> host_lock when calling CMD_SP and taking a ref and checking that the fsp
> is not null.



* RE: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Vasu Dev @ 2010-09-02 17:24 UTC
  To: Nicholas A. Bellinger
  Cc: Matthew Wilcox, Zou, Yi, Mike Christie, linux-scsi,
	FUJITA Tomonori, James Bottomley, devel, Christoph Hellwig,
	Joe Eykholt, Hannes Reinecke

On Wed, 2010-09-01 at 14:38 -0700, Nicholas A. Bellinger wrote:
> > Maybe two additional checks here is not so neat, but it's not too bad
> > either since it's just these two. I excluded the unlocked_qcmds checks
> > from the scsi_error queuecommand call to avoid cluttering that code
> > with additional checks w/o any good case for that code path.
> > 
> 
> Hmmmm, handing off the locking like this between ML and LLD gets ugly
> pretty quick, so I am not exactly sure this is the right way to go
> about it..
> 
> Maybe in code terms we might need to consider converting some (or
> all..?) struct Scsi_Host->host_lock accesses to some form of Linux/SCSI
> Host specific lock and unlock wrapper that is aware of the current
> ->queuecommand() LLD lock status..?  Obviously this would only need to
> cover the queuecommand path here first, but perhaps it would be useful
> in other areas as well..?
> 

This patch would add only two conditional unlocks of the host lock in
scsi_dispatch_cmd, and these are not applicable to the several other
places host_lock is used, so a wrapper for just these two new places is
not worthwhile; it would only hide the conditions inside the wrapper.
Or maybe I'm missing something here.

  
> This would at least (I think) allow ML and LLD code to be a tad bit
> cleaner than something like the example above where host->unlocked_qcmds
> TRUE/FALSE conditionals exist directly within ML and LLD code.
> 

Yeah, but direct use makes things more obvious at first look. However,
the neatness of a wrapper (or macros) is worthwhile if there are several
such places. In any case this is a very minor code-style thing, and I'm
fine with a wrapper if you want.

	Thanks
	Vasu



* RE: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Nicholas A. Bellinger @ 2010-09-02 19:48 UTC
  To: Vasu Dev
  Cc: Matthew Wilcox, Zou, Yi, Mike Christie, linux-scsi,
	FUJITA Tomonori, James Bottomley, devel, Christoph Hellwig,
	Joe Eykholt, Hannes Reinecke

On Thu, 2010-09-02 at 10:24 -0700, Vasu Dev wrote:
> On Wed, 2010-09-01 at 14:38 -0700, Nicholas A. Bellinger wrote:
> > > Maybe two additional checks here is not so neat but not too bad
> > either
> > > as just two additional checks here, I excluded this unlocked_qcmd
> > checks
> > > from scsi_error queuecommand calling to not clutter its code there
> > with
> > > these additional checks  w/o any good case for that code path.
> > > 
> > 
> > Hmmmm, handing off the locking like this between ML and LLD gets ugly
> > pretty quick, so I am not exactly sure this is the right way to go
> > about it..
> > 
> > Maybe in code terms we might need to consider converting some (or
> > all..?) struct Scsi_Host->host_lock accesses to some form of
> > Linux/SCSI Host specific lock and unlock wrapper that is aware of the
> > current ->queuecommand() LLD lock status..?  Obviously this would
> > only need to cover the queuecommand path here first, but perhaps it
> > would be useful in other areas as well..?
> > 
> 
> This patch would add only two conditional unlocks of the host lock in
> scsi_dispatch_cmd, and these are not applicable to the several other
> places host_lock is used, so a wrapper for just these two new places is
> not worthwhile; it would only hide the conditions inside the wrapper.
> Or maybe I'm missing something here.

Hmmm, fair point here..  It is just that seeing a single-block
conditional juggling spin unlocks and locks like this makes me a little
uneasy.. ;-)

> 
>   
> > This would at least (I think) allow ML and LLD code to be a tad bit
> > cleaner than something like the example above where host->unlocked_qcmds
> > TRUE/FALSE conditionals exist directly within ML and LLD code.
> > 
> 
> Yeah, but direct use makes things more obvious at first look. However,
> the neatness of a wrapper (or macros) is worthwhile if there are several
> such places. In any case this is a very minor code-style thing, and I'm
> fine with a wrapper if you want.

Sure, I am thinking of these simple host_lock wrappers as more of a
transitional look for LLDs than anything else..

Btw, I would be happy to include your forthcoming v2 patch into a
lio-core-2.6.git branch, and give it some testing in the next week.

Thanks!

--nab




* RE: [Open-FCoE] [RFC PATCH] scsi, fcoe, libfc: drop scsi host_lock use from fc_queuecommand
From: Vasu Dev @ 2010-09-03 22:38 UTC
  To: Nicholas A. Bellinger
  Cc: Matthew Wilcox, Zou, Yi, Mike Christie, linux-scsi,
	FUJITA Tomonori, James Bottomley, devel, Christoph Hellwig,
	Joe Eykholt, Hannes Reinecke

On Thu, 2010-09-02 at 12:48 -0700, Nicholas A. Bellinger wrote:
> > Yeah, but direct use makes things more obvious at first look. However,
> > the neatness of a wrapper (or macros) is worthwhile if there are several
> > such places. In any case this is a very minor code-style thing, and I'm
> > fine with a wrapper if you want.
> 
> Sure, I am thinking of these simple host_lock wrappers as more of a
> transitional look for LLDs than anything else..
> 
> Btw, I would be happy to include your forthcoming v2 patch into a
> lio-core-2.6.git branch, and give it some testing in the next week.
> 

Awesome, thanks for all your help, Nab.

	I tried using a wrapper instead of open-coded checks to drop the
host_lock before fc_queuecommand, something like this:-

+static inline void scsi_qcmd_host_unlock(struct Scsi_Host *shost,
+                                         unsigned long irq_flags)
+{
+       if (shost->host_lock_pending) {
+               shost->host_lock_pending = 0;
+               spin_unlock_irqrestore(shost->host_lock, irq_flags);
+       } else if (shost->unlocked_qcmds)
+               spin_unlock_irqrestore(shost->host_lock, irq_flags);
+       else
+               shost->host_lock_pending = 1;
+}
+

This didn't work well, and besides, it required a lot more checks to
track the host lock state so that this wrapper could be called w/o
checks, as in:-

	scsi_qcmd_host_unlock(host, flags);
	rtn = host->hostt->queuecommand(cmd, scsi_done);
	scsi_qcmd_host_unlock(host, flags);

I think it is better off with the simple checks for now, as I posted in
my patch; maybe a wrapper can be added in case more places need such
checks, as we talked about before.

	Thanks
	Vasu 





