* qla2xxx: Conditionally disable automatic queue full tracking
@ 2009-09-23 23:59 Giridhar Malavali
  2009-09-24 14:42 ` Michael Reed
  2009-09-24 19:49 ` Mike Christie
  0 siblings, 2 replies; 14+ messages in thread
From: Giridhar Malavali @ 2009-09-23 23:59 UTC (permalink / raw)
  To: mdr, James Bottomley; +Cc: Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

Hi Michael/James,

	Patches were submitted recently to move the queue ramp up/down code to the
scsi mid layer. With this change, I don't see a need for a module
parameter to disable queue full tracking in the qla2xxx driver. Andrew
mentioned that this was introduced to avoid wobbling behavior on the
wire due to queue depth modifications. I just wanted to check whether
the same needs to be done in the scsi mid layer too.


Thanks,
Giridhar.M.B


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-23 23:59 qla2xxx: Conditionally disable automatic queue full tracking Giridhar Malavali
@ 2009-09-24 14:42 ` Michael Reed
  2009-09-24 17:05   ` Giridhar Malavali
  2009-09-24 19:49 ` Mike Christie
  1 sibling, 1 reply; 14+ messages in thread
From: Michael Reed @ 2009-09-24 14:42 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

Hello,

The purpose of the queue full tracking patch in qla2xxx is to
keep the driver from changing a user space override of the queue
depth back to what the driver believes is the "correct" value.

The raid devices that we use have per raid controller queue depth
limits and have at times demonstrated, uh, bad behavior when
their queues are full for sustained periods of time.  SGI needs
to be able to set queue depth for a lun based upon access patterns
and performance requirements of the entire cluster and know that
it will be honored as an upper bound.

Might you provide a pointer to the recently submitted patches?
I haven't followed linux-scsi in a while....  I'll be quite
happy to take a look.

Thanks,
 Mike


Giridhar Malavali wrote:
> Hi Michael/James,
> 
> 	Patches were submitted to move queue ramp up/down code recently to  
> scsi mid layer. With this change, I don't see a need for a module  
> parameter to disable queuefull tracking  in qla2xxx driver. Andrew,  
> mentioned that this got introduced to avoid wobbling behavior on the  
> wire due to queue depth modifications. Just wanted to check whether  
> the same need to be done in scsi mid layer too.
> 
> 
> Thanks,
> Giridhar.M.B
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-24 14:42 ` Michael Reed
@ 2009-09-24 17:05   ` Giridhar Malavali
  2009-09-24 19:15     ` Michael Reed
  0 siblings, 1 reply; 14+ messages in thread
From: Giridhar Malavali @ 2009-09-24 17:05 UTC (permalink / raw)
  To: Michael Reed
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

Hi Michael,

Here are the patches..

http://marc.info/?a=119828370000006&r=1&w=2 .
http://marc.info/?l=linux-scsi&m=125201657106722&w=2

Thanks,
Giridhar.M.B



On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:

> Hello,
>
> The purpose of the queue full tracking patch in qla2xxx is to
> keep the driver from changing a user space override of the queue
> depth back to what the driver believes is the "correct" value.
>
> The raid devices that we use have per raid controller queue depth
> limits and have at times demonstrated, uh, bad behavior when
> their queues are full for sustained periods of time.  SGI needs
> to be able to set queue depth for a lun based upon access patterns
> and performance requirements of the entire cluster and know that
> it will be honored as an upper bound.
>
> Might you provide a pointer to the recently submitted patches?
> I haven't followed linux-scsi in a while....  I'll be quite
> happy to take a look.
>
> Thanks,
> Mike
>
>
> Giridhar Malavali wrote:
>> Hi Michael/James,
>>
>> 	Patches were submitted to move queue ramp up/down code recently to
>> scsi mid layer. With this change, I don't see a need for a module
>> parameter to disable queuefull tracking  in qla2xxx driver. Andrew,
>> mentioned that this got introduced to avoid wobbling behavior on the
>> wire due to queue depth modifications. Just wanted to check whether
>> the same need to be done in scsi mid layer too.
>>
>>
>> Thanks,
>> Giridhar.M.B
>>



* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-24 17:05   ` Giridhar Malavali
@ 2009-09-24 19:15     ` Michael Reed
  2009-09-24 20:55       ` Giridhar Malavali
  0 siblings, 1 reply; 14+ messages in thread
From: Michael Reed @ 2009-09-24 19:15 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

To answer your query, yes.  Ultimately, I believe the ml should be
modified to view a user space modification to a lun's queue depth
as an upper bound.  I don't care, much, whether the system dynamically
adjusts the queue depth or leaves it alone as long as it honors the
value programmed via user space.
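
Something like this, purely as a sketch of the policy I mean (a userspace
model with invented names, not actual mid layer code):

```c
#include <assert.h>

/* Illustrative model: the value written from user space is a hard
 * upper bound; the ramp logic may only move the effective depth
 * at or below it. */
struct lun_queue {
	int user_max;	/* depth set by the administrator via sysfs */
	int depth;	/* current effective queue depth */
};

/* On QUEUE_FULL: halve the depth, keeping at least one slot. */
static void ramp_down(struct lun_queue *q)
{
	q->depth /= 2;
	if (q->depth < 1)
		q->depth = 1;
}

/* After a quiet period: grow again, but never past the user bound. */
static void ramp_up(struct lun_queue *q, int step)
{
	q->depth += step;
	if (q->depth > q->user_max)
		q->depth = q->user_max;
}
```

With something like that, no amount of ramp-up can overshoot the
user-programmed value, which is all we really need.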

Are you volunteering to do the work?  :)  How can I help?

Thanks,
 Mike


Giridhar Malavali wrote:
> Hi Michael,
> 
> Here are the patches..
> 
> http://marc.info/?a=119828370000006&r=1&w=2 .
> http://marc.info/?l=linux-scsi&m=125201657106722&w=2
> 
> Thanks,
> Giridhar.M.B
> 
> 
> 
> On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:
> 
>> Hello,
>>
>> The purpose of the queue full tracking patch in qla2xxx is to
>> keep the driver from changing a user space override of the queue
>> depth back to what the driver believes is the "correct" value.
>>
>> The raid devices that we use have per raid controller queue depth
>> limits and have at times demonstrated, uh, bad behavior when
>> their queues are full for sustained periods of time.  SGI needs
>> to be able to set queue depth for a lun based upon access patterns
>> and performance requirements of the entire cluster and know that
>> it will be honored as an upper bound.
>>
>> Might you provide a pointer to the recently submitted patches?
>> I haven't followed linux-scsi in a while....  I'll be quite
>> happy to take a look.
>>
>> Thanks,
>> Mike
>>
>>
>> Giridhar Malavali wrote:
>>> Hi Michael/James,
>>>
>>> 	Patches were submitted to move queue ramp up/down code recently to
>>> scsi mid layer. With this change, I don't see a need for a module
>>> parameter to disable queuefull tracking  in qla2xxx driver. Andrew,
>>> mentioned that this got introduced to avoid wobbling behavior on the
>>> wire due to queue depth modifications. Just wanted to check whether
>>> the same need to be done in scsi mid layer too.
>>>
>>>
>>> Thanks,
>>> Giridhar.M.B
>>>
> 


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-23 23:59 qla2xxx: Conditionally disable automatic queue full tracking Giridhar Malavali
  2009-09-24 14:42 ` Michael Reed
@ 2009-09-24 19:49 ` Mike Christie
  1 sibling, 0 replies; 14+ messages in thread
From: Mike Christie @ 2009-09-24 19:49 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: mdr, James Bottomley, LinuxSCSI, Andrew Vasquez, vasu.dev

Giridhar Malavali wrote:
> Hi Michael/James,
> 
>     Patches were submitted to move queue ramp up/down code recently to 
> scsi mid layer. With this change, I don't see a need for a module 
> parameter to disable queuefull tracking  in qla2xxx driver. Andrew, 
> mentioned that this got introduced to avoid wobbling behavior on the 
> wire due to queue depth modifications. Just wanted to check whether the 
> same need to be done in scsi mid layer too.
> 

I think so. I believe Michael Reed had a valid reason for it, and I do 
not see why we would not hit a similar problem with other drivers.


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-24 19:15     ` Michael Reed
@ 2009-09-24 20:55       ` Giridhar Malavali
  2009-09-24 21:02         ` Michael Reed
  0 siblings, 1 reply; 14+ messages in thread
From: Giridhar Malavali @ 2009-09-24 20:55 UTC (permalink / raw)
  To: Michael Reed
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

I will be more than willing to do these changes. Just curious, was
this seen on non-qla2xxx drivers too?

-- Giri


On Sep 24, 2009, at 12:15 PM, Michael Reed wrote:

> To answer your query, yes.  Ultimately, I believe the ml should be
> modified to view a user space modification to a lun's queue depth
> as an upper bound.  I don't care, much, whether the system dynamically
> adjusts the queue depth or leaves it alone as long as it honors the
> value programmed via user space.
>
> Are you volunteering to do the work?  :)  How can I help?
>
> Thanks,
> Mike
>
>
> Giridhar Malavali wrote:
>> Hi Michael,
>>
>> Here are the patches..
>>
>> http://marc.info/?a=119828370000006&r=1&w=2 .
>> http://marc.info/?l=linux-scsi&m=125201657106722&w=2
>>
>> Thanks,
>> Giridhar.M.B
>>
>>
>>
>> On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:
>>
>>> Hello,
>>>
>>> The purpose of the queue full tracking patch in qla2xxx is to
>>> keep the driver from changing a user space override of the queue
>>> depth back to what the driver believes is the "correct" value.
>>>
>>> The raid devices that we use have per raid controller queue depth
>>> limits and have at times demonstrated, uh, bad behavior when
>>> their queues are full for sustained periods of time.  SGI needs
>>> to be able to set queue depth for a lun based upon access patterns
>>> and performance requirements of the entire cluster and know that
>>> it will be honored as an upper bound.
>>>
>>> Might you provide a pointer to the recently submitted patches?
>>> I haven't followed linux-scsi in a while....  I'll be quite
>>> happy to take a look.
>>>
>>> Thanks,
>>> Mike
>>>
>>>
>>> Giridhar Malavali wrote:
>>>> Hi Michael/James,
>>>>
>>>> 	Patches were submitted to move queue ramp up/down code recently to
>>>> scsi mid layer. With this change, I don't see a need for a module
>>>> parameter to disable queuefull tracking  in qla2xxx driver. Andrew,
>>>> mentioned that this got introduced to avoid wobbling behavior on  
>>>> the
>>>> wire due to queue depth modifications. Just wanted to check whether
>>>> the same need to be done in scsi mid layer too.
>>>>
>>>>
>>>> Thanks,
>>>> Giridhar.M.B
>>>>
>>



* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-24 20:55       ` Giridhar Malavali
@ 2009-09-24 21:02         ` Michael Reed
  2009-09-30  1:34           ` Giridhar Malavali
  0 siblings, 1 reply; 14+ messages in thread
From: Michael Reed @ 2009-09-24 21:02 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

The LSI fusion FC driver did not exhibit the problem.  It doesn't
dynamically re-adjust the queue depth.  I have limited experience
with Emulex and cannot comment.

Thank you for offering to do the work.  I'll gladly review and test
anything you'd care to send my way.

Mike


Giridhar Malavali wrote:
> I will be more than willing to do these changes. Just curious, was  
> this seen on non qla2xxx drivers too.
> 
> -- Giri
> 
> 
> On Sep 24, 2009, at 12:15 PM, Michael Reed wrote:
> 
>> To answer your query, yes.  Ultimately, I believe the ml should be
>> modified to view a user space modification to a lun's queue depth
>> as an upper bound.  I don't care, much, whether the system dynamically
>> adjusts the queue depth or leaves it alone as long as it honors the
>> value programmed via user space.
>>
>> Are you volunteering to do the work?  :)  How can I help?
>>
>> Thanks,
>> Mike
>>
>>
>> Giridhar Malavali wrote:
>>> Hi Michael,
>>>
>>> Here are the patches..
>>>
>>> http://marc.info/?a=119828370000006&r=1&w=2 .
>>> http://marc.info/?l=linux-scsi&m=125201657106722&w=2
>>>
>>> Thanks,
>>> Giridhar.M.B
>>>
>>>
>>>
>>> On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:
>>>
>>>> Hello,
>>>>
>>>> The purpose of the queue full tracking patch in qla2xxx is to
>>>> keep the driver from changing a user space override of the queue
>>>> depth back to what the driver believes is the "correct" value.
>>>>
>>>> The raid devices that we use have per raid controller queue depth
>>>> limits and have at times demonstrated, uh, bad behavior when
>>>> their queues are full for sustained periods of time.  SGI needs
>>>> to be able to set queue depth for a lun based upon access patterns
>>>> and performance requirements of the entire cluster and know that
>>>> it will be honored as an upper bound.
>>>>
>>>> Might you provide a pointer to the recently submitted patches?
>>>> I haven't followed linux-scsi in a while....  I'll be quite
>>>> happy to take a look.
>>>>
>>>> Thanks,
>>>> Mike
>>>>
>>>>
>>>> Giridhar Malavali wrote:
>>>>> Hi Michael/James,
>>>>>
>>>>> 	Patches were submitted to move queue ramp up/down code recently to
>>>>> scsi mid layer. With this change, I don't see a need for a module
>>>>> parameter to disable queuefull tracking  in qla2xxx driver. Andrew,
>>>>> mentioned that this got introduced to avoid wobbling behavior on  
>>>>> the
>>>>> wire due to queue depth modifications. Just wanted to check whether
>>>>> the same need to be done in scsi mid layer too.
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Giridhar.M.B
>>>>>
> 


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-24 21:02         ` Michael Reed
@ 2009-09-30  1:34           ` Giridhar Malavali
  2009-09-30 13:08             ` Michael Reed
  2009-09-30 18:23             ` Mike Christie
  0 siblings, 2 replies; 14+ messages in thread
From: Giridhar Malavali @ 2009-09-30  1:34 UTC (permalink / raw)
  To: Michael Reed
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez, vasu.dev

Michael,

Can you please clarify the following:

1) You are ok with the scsi mid layer adjusting the queue depth as long as
the ramp-up code does not overshoot the user-set threshold.
2) You want this to be controlled at the device/LUN level, so the
threshold is per LUN rather than per target.
3) From your previous mail, I understand that you don't require a
combined limit per target, i.e. that the total queue depth for all LUNs
on a particular target should not exceed some threshold.

-- Giri




On Sep 24, 2009, at 2:02 PM, Michael Reed wrote:

> The LSI fusion FC driver did not exhibit the problem.  It doesn't
> dynamically re-adjust the queue depth.  I have limited experience
> with Emulex and cannot comment.
>
> Thank you for offering to do the work.  I'll gladly review and test
> anything you'd care to send my way.
>
> Mike
>
>
> Giridhar Malavali wrote:
>> I will be more than willing to do these changes. Just curious, was
>> this seen on non qla2xxx drivers too.
>>
>> -- Giri
>>
>>
>> On Sep 24, 2009, at 12:15 PM, Michael Reed wrote:
>>
>>> To answer your query, yes.  Ultimately, I believe the ml should be
>>> modified to view a user space modification to a lun's queue depth
>>> as an upper bound.  I don't care, much, whether the system  
>>> dynamically
>>> adjusts the queue depth or leaves it alone as long as it honors the
>>> value programmed via user space.
>>>
>>> Are you volunteering to do the work?  :)  How can I help?
>>>
>>> Thanks,
>>> Mike
>>>
>>>
>>> Giridhar Malavali wrote:
>>>> Hi Michael,
>>>>
>>>> Here are the patches..
>>>>
>>>> http://marc.info/?a=119828370000006&r=1&w=2 .
>>>> http://marc.info/?l=linux-scsi&m=125201657106722&w=2
>>>>
>>>> Thanks,
>>>> Giridhar.M.B
>>>>
>>>>
>>>>
>>>> On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> The purpose of the queue full tracking patch in qla2xxx is to
>>>>> keep the driver from changing a user space override of the queue
>>>>> depth back to what the driver believes is the "correct" value.
>>>>>
>>>>> The raid devices that we use have per raid controller queue depth
>>>>> limits and have at times demonstrated, uh, bad behavior when
>>>>> their queues are full for sustained periods of time.  SGI needs
>>>>> to be able to set queue depth for a lun based upon access patterns
>>>>> and performance requirements of the entire cluster and know that
>>>>> it will be honored as an upper bound.
>>>>>
>>>>> Might you provide a pointer to the recently submitted patches?
>>>>> I haven't followed linux-scsi in a while....  I'll be quite
>>>>> happy to take a look.
>>>>>
>>>>> Thanks,
>>>>> Mike
>>>>>
>>>>>
>>>>> Giridhar Malavali wrote:
>>>>>> Hi Michael/James,
>>>>>>
>>>>>> 	Patches were submitted to move queue ramp up/down code  
>>>>>> recently to
>>>>>> scsi mid layer. With this change, I don't see a need for a module
>>>>>> parameter to disable queuefull tracking  in qla2xxx driver.  
>>>>>> Andrew,
>>>>>> mentioned that this got introduced to avoid wobbling behavior on
>>>>>> the
>>>>>> wire due to queue depth modifications. Just wanted to check  
>>>>>> whether
>>>>>> the same need to be done in scsi mid layer too.
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Giridhar.M.B
>>>>>>
>>



* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-30  1:34           ` Giridhar Malavali
@ 2009-09-30 13:08             ` Michael Reed
  2009-09-30 13:43               ` Michael Reed
  2009-09-30 18:23             ` Mike Christie
  1 sibling, 1 reply; 14+ messages in thread
From: Michael Reed @ 2009-09-30 13:08 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez,
	vasu.dev, Jeremy Higdon

Hi Giri,

Giridhar Malavali wrote:
> Michael,
> 
> Can u please clarify following things.
> 
> 1) You are ok with scsi mid layer adjusting the queue depth as long as  
> the ramp-up code does not over shoot the user set threshold.

Yes.  This was the problem we had with qla2xxx.  Sites would tune
their cluster to avoid most queue fulls and qla2xxx would merrily
ramp queue depths back up to 32.

> 2) You want this to be controlled at device/LUN level. So the  
> threshold is per LUN rather than target.

That's the way it is today and we're okay with this arrangement.
This allows for static "performance" balancing.  Luns designated
for high throughput can receive more outstanding commands than those
only lightly used.

> 3) From your previous mail, I understand that you don't require a  
> combined limit per target. Say the total queue depth for all LUN's on  
> a particular target should not exceed some threshold.

Require, no.  This is the way it is today.  Desirable, YES.

This would be useful in both single system and cluster environments
allowing the administrator to not have to be so conservative with
their per-lun queue depth calculations.  On the raids we use, a
"target" corresponds to a fibre cable entering a raid
controller.  Every storage entity presented is a "lun".  The max
number of outstanding commands is per raid controller.  I think this
is a good idea and would support its implementation.  :)  I don't
believe it eliminates the need for the ability to set a queue depth
"per lun" but I could be talked into that belief.
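
To sketch what I mean by a combined per-target limit (names invented,
just an illustration of the accounting, not a patch):

```c
#include <assert.h>

/* Hypothetical per-target accounting: the sum of commands outstanding
 * across all luns behind one target port must stay below the raid
 * controller's limit, whatever the individual per-lun depths are. */
struct target_limit {
	int can_queue;		/* controller-wide max outstanding */
	int outstanding;	/* commands in flight, summed over luns */
};

/* May another command be issued to any lun on this target? */
static int target_may_queue(const struct target_limit *t)
{
	return t->outstanding < t->can_queue;
}
```

A check like that would let the per-lun depths be set less
conservatively, since the controller-wide bound gets enforced
separately.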

Mike

> 
> -- Giri
> 
> 
> 
> 
> On Sep 24, 2009, at 2:02 PM, Michael Reed wrote:
> 
>> The LSI fusion FC driver did not exhibit the problem.  It doesn't
>> dynamically re-adjust the queue depth.  I have limited experience
>> with Emulex and cannot comment.
>>
>> Thank you for offering to do the work.  I'll gladly review and test
>> anything you'd care to send my way.
>>
>> Mike
>>
>>
>> Giridhar Malavali wrote:
>>> I will be more than willing to do these changes. Just curious, was
>>> this seen on non qla2xxx drivers too.
>>>
>>> -- Giri
>>>
>>>
>>> On Sep 24, 2009, at 12:15 PM, Michael Reed wrote:
>>>
>>>> To answer your query, yes.  Ultimately, I believe the ml should be
>>>> modified to view a user space modification to a lun's queue depth
>>>> as an upper bound.  I don't care, much, whether the system  
>>>> dynamically
>>>> adjusts the queue depth or leaves it alone as long as it honors the
>>>> value programmed via user space.
>>>>
>>>> Are you volunteering to do the work?  :)  How can I help?
>>>>
>>>> Thanks,
>>>> Mike
>>>>
>>>>
>>>> Giridhar Malavali wrote:
>>>>> Hi Michael,
>>>>>
>>>>> Here are the patches..
>>>>>
>>>>> http://marc.info/?a=119828370000006&r=1&w=2 .
>>>>> http://marc.info/?l=linux-scsi&m=125201657106722&w=2
>>>>>
>>>>> Thanks,
>>>>> Giridhar.M.B
>>>>>
>>>>>
>>>>>
>>>>> On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> The purpose of the queue full tracking patch in qla2xxx is to
>>>>>> keep the driver from changing a user space override of the queue
>>>>>> depth back to what the driver believes is the "correct" value.
>>>>>>
>>>>>> The raid devices that we use have per raid controller queue depth
>>>>>> limits and have at times demonstrated, uh, bad behavior when
>>>>>> their queues are full for sustained periods of time.  SGI needs
>>>>>> to be able to set queue depth for a lun based upon access patterns
>>>>>> and performance requirements of the entire cluster and know that
>>>>>> it will be honored as an upper bound.
>>>>>>
>>>>>> Might you provide a pointer to the recently submitted patches?
>>>>>> I haven't followed linux-scsi in a while....  I'll be quite
>>>>>> happy to take a look.
>>>>>>
>>>>>> Thanks,
>>>>>> Mike
>>>>>>
>>>>>>
>>>>>> Giridhar Malavali wrote:
>>>>>>> Hi Michael/James,
>>>>>>>
>>>>>>> 	Patches were submitted to move queue ramp up/down code  
>>>>>>> recently to
>>>>>>> scsi mid layer. With this change, I don't see a need for a module
>>>>>>> parameter to disable queuefull tracking  in qla2xxx driver.  
>>>>>>> Andrew,
>>>>>>> mentioned that this got introduced to avoid wobbling behavior on
>>>>>>> the
>>>>>>> wire due to queue depth modifications. Just wanted to check  
>>>>>>> whether
>>>>>>> the same need to be done in scsi mid layer too.
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Giridhar.M.B
>>>>>>>
> 


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-30 13:08             ` Michael Reed
@ 2009-09-30 13:43               ` Michael Reed
  0 siblings, 0 replies; 14+ messages in thread
From: Michael Reed @ 2009-09-30 13:43 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: James Bottomley, Mike Christie, LinuxSCSI, Andrew Vasquez,
	vasu.dev, Jeremy Higdon



Michael Reed wrote:
> Hi Giri,
> 
> Giridhar Malavali wrote:
>> Michael,
>>
>> Can u please clarify following things.
>>
>> 1) You are ok with scsi mid layer adjusting the queue depth as long as  
>> the ramp-up code does not over shoot the user set threshold.
> 
> Yes.  This was the problem we had with qla2xxx.  Sites would tune
> their cluster to avoid most queue fulls and qla2xxx would merrily
> ramp queue depths back up to 32.
> 
>> 2) You want this to be controlled at device/LUN level. So the  
>> threshold is per LUN rather than target.
> 
> That's the way it is today and we're okay with this arrangement.
> This allows for static "performance" balancing.  Luns designated
> for high throughput can receive more outstanding commands than those
> only lightly used.
> 
>> 3) From your previous mail, I understand that you don't require a  
>> combined limit per target. Say the total queue depth for all LUN's on  
>> a particular target should not exceed some threshold.
> 
> Require, no.  This is the way it is today.  Desirable, YES.
> 
> This would be useful in both single system and cluster environments
> allowing the administrator to not have to be so conservative with
> their per-lun queue depth calculations.  On the raids we use, a
> "target" corresponds to a fibre cable entering a raid
> controller.  Every storage entity presented is a "lun".  The max
> number of outstanding commands is per raid controller.  I think this
> is a good idea and would support its implementation.  :)  I don't
> believe it eliminates the need for the ability to set a queue depth
> "per lun" but I could be talked into that belief.

In general, handling of queue full in a multi-initiator environment
is tricky in that a queue full status may be returned when the
initiator sending the command has no outstanding commands....

Mike

> 
> Mike
> 
>> -- Giri
>>
>>
>>
>>
>> On Sep 24, 2009, at 2:02 PM, Michael Reed wrote:
>>
>>> The LSI fusion FC driver did not exhibit the problem.  It doesn't
>>> dynamically re-adjust the queue depth.  I have limited experience
>>> with Emulex and cannot comment.
>>>
>>> Thank you for offering to do the work.  I'll gladly review and test
>>> anything you'd care to send my way.
>>>
>>> Mike
>>>
>>>
>>> Giridhar Malavali wrote:
>>>> I will be more than willing to do these changes. Just curious, was
>>>> this seen on non qla2xxx drivers too.
>>>>
>>>> -- Giri
>>>>
>>>>
>>>> On Sep 24, 2009, at 12:15 PM, Michael Reed wrote:
>>>>
>>>>> To answer your query, yes.  Ultimately, I believe the ml should be
>>>>> modified to view a user space modification to a lun's queue depth
>>>>> as an upper bound.  I don't care, much, whether the system  
>>>>> dynamically
>>>>> adjusts the queue depth or leaves it alone as long as it honors the
>>>>> value programmed via user space.
>>>>>
>>>>> Are you volunteering to do the work?  :)  How can I help?
>>>>>
>>>>> Thanks,
>>>>> Mike
>>>>>
>>>>>
>>>>> Giridhar Malavali wrote:
>>>>>> Hi Michael,
>>>>>>
>>>>>> Here are the patches..
>>>>>>
>>>>>> http://marc.info/?a=119828370000006&r=1&w=2 .
>>>>>> http://marc.info/?l=linux-scsi&m=125201657106722&w=2
>>>>>>
>>>>>> Thanks,
>>>>>> Giridhar.M.B
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sep 24, 2009, at 7:42 AM, Michael Reed wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> The purpose of the queue full tracking patch in qla2xxx is to
>>>>>>> keep the driver from changing a user space override of the queue
>>>>>>> depth back to what the driver believes is the "correct" value.
>>>>>>>
>>>>>>> The raid devices that we use have per raid controller queue depth
>>>>>>> limits and have at times demonstrated, uh, bad behavior when
>>>>>>> their queues are full for sustained periods of time.  SGI needs
>>>>>>> to be able to set queue depth for a lun based upon access patterns
>>>>>>> and performance requirements of the entire cluster and know that
>>>>>>> it will be honored as an upper bound.
>>>>>>>
>>>>>>> Might you provide a pointer to the recently submitted patches?
>>>>>>> I haven't followed linux-scsi in a while....  I'll be quite
>>>>>>> happy to take a look.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Mike
>>>>>>>
>>>>>>>
>>>>>>> Giridhar Malavali wrote:
>>>>>>>> Hi Michael/James,
>>>>>>>>
>>>>>>>> 	Patches were submitted to move queue ramp up/down code  
>>>>>>>> recently to
>>>>>>>> scsi mid layer. With this change, I don't see a need for a module
>>>>>>>> parameter to disable queuefull tracking  in qla2xxx driver.  
>>>>>>>> Andrew,
>>>>>>>> mentioned that this got introduced to avoid wobbling behavior on
>>>>>>>> the
>>>>>>>> wire due to queue depth modifications. Just wanted to check  
>>>>>>>> whether
>>>>>>>> the same need to be done in scsi mid layer too.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Giridhar.M.B
>>>>>>>>


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-30  1:34           ` Giridhar Malavali
  2009-09-30 13:08             ` Michael Reed
@ 2009-09-30 18:23             ` Mike Christie
  2009-10-02  0:19               ` Michael Reed
  1 sibling, 1 reply; 14+ messages in thread
From: Mike Christie @ 2009-09-30 18:23 UTC (permalink / raw)
  To: Giridhar Malavali
  Cc: Michael Reed, James Bottomley, LinuxSCSI, Andrew Vasquez, vasu.dev

On 09/29/2009 08:34 PM, Giridhar Malavali wrote:
> 3) From your previous mail, I understand that you don't require a
> combined limit per target. Say the total queue depth for all LUN's on a
> particular target should not exceed some threshold.
>

James Smart had done this patch
http://marc.info/?l=linux-scsi&m=121070114018354&w=2
where it sets the starget->can_queue based on info we get from vendors. 
The patch did not get merged. JamesB does not want the 
starget->can_queue to be static, and wants code like the queue full 
tracking code which dynamically ramps the device queue depth up and down.

I am not sure if JamesB meant that he wants to ramp down the 
starget->can_queue based on getting a QUEUE_FULL though. I thought he 
just meant he wants it to be dynamic. If I am right, then I think we 
could use JamesS's patch to set an initial starget->can_queue and add 
another field for a max value. Then we could add some code that ramps 
down/up based on something like command completion time or throughput or 
some other value.
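
Rough sketch of that idea (field and threshold names are all invented;
completion latency as the signal instead of QUEUE_FULL):

```c
#include <assert.h>

/* Illustration only: ramp a target-wide can_queue up and down from
 * observed completion latency, bounded by a separate max field. */
struct starget_ramp {
	int can_queue;		/* current dynamic value */
	int max_can_queue;	/* proposed new field: hard ceiling */
};

static void ramp_on_latency(struct starget_ramp *t, unsigned int lat_ms,
			    unsigned int hi_ms, unsigned int lo_ms)
{
	if (lat_ms > hi_ms && t->can_queue > 1)
		t->can_queue--;		/* completions slowing: back off */
	else if (lat_ms < lo_ms && t->can_queue < t->max_can_queue)
		t->can_queue++;		/* fast and below ceiling: ramp up */
}
```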

If JamesS did mean that he wanted to ramp down the starget->can_queue 
based on QUEUE_FULLs then JamesS and JamesB do not agree on that and we 
are stuck.


* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-09-30 18:23             ` Mike Christie
@ 2009-10-02  0:19               ` Michael Reed
  2009-10-02 17:17                 ` James Smart
  0 siblings, 1 reply; 14+ messages in thread
From: Michael Reed @ 2009-10-02  0:19 UTC (permalink / raw)
  To: Mike Christie
  Cc: Giridhar Malavali, James Bottomley, LinuxSCSI, Andrew Vasquez,
	vasu.dev, Jeremy Higdon



Mike Christie wrote:
> On 09/29/2009 08:34 PM, Giridhar Malavali wrote:
>> 3) From your previous mail, I understand that you don't require a
>> combined limit per target. Say the total queue depth for all LUN's on a
>> particular target should not exceed some threshold.
>>
> 
> James Smart had done this patch
> http://marc.info/?l=linux-scsi&m=121070114018354&w=2
> where it sets the starget->can_queue based on info we get from vendors. 
> The patch did not get merged. JamesB does not want the 
> starget->can_queue to be static, and wants code like the queue full 
> tracking code which dynamically ramps the device queue depth up and down.

Agree.  Some amount of dynamic management of queue full seems desirable.
I believe any such dynamic management needs to acknowledge that it
exists in a multi-initiator environment, i.e., might get a QUEUE_FULL
with no other commands outstanding.

> 
> I am not sure if JamesB meant that he wants to ramp down the 
> starget->can_queue based on getting a QUEUE_FULL though. I thought he 
> just meant he wants it to be dynamic. 

What does "be dynamic" mean if not adjusted based upon a target's
response to scsi commands?

> If I am right, then I think we 
> could use JamesS's patch to set an initial starget->can_queue and add 
> another field for a max value. Then we could add some code that ramps 
> down/up based on something like command completion time or throughput or 
> some other value.

We don't necessarily need or want can_queue set by a value encoded into
a kernel table.  Some of our raid devices' can_queue values vary based
upon the firmware they are running.  A static table would, at best, be a
decent starting point.  At worst, it could dramatically over-commit the
target.  Our raid devices' max can_queue is either per raid controller
or per host port.

Whatever path we go down, I view having a user programmable upper bound
as a requirement.

> 
> If JamesB did mean that he wanted to ramp down the starget->can_queue 
> based on QUEUE_FULLs then JamesS and JamesB do not agree on that and we 
> are stuck.

I don't consider ramp up/down of starget->can_queue a requirement.
But I also don't consider its presence a problem.

Our requirements are pretty simple: the ability to limit the number
of commands queued to a target or lun in a multi-initiator environment
such that no individual initiator can fully consume the resources
of the target/lun.  I.e., we want a user programmable upper bound
on all queue_depth and can_queue adjustments.  (Yes, I've stated this
a few times.  :)


Mike




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-10-02  0:19               ` Michael Reed
@ 2009-10-02 17:17                 ` James Smart
  2009-10-06 17:17                   ` Michael Reed
  0 siblings, 1 reply; 14+ messages in thread
From: James Smart @ 2009-10-02 17:17 UTC (permalink / raw)
  To: Michael Reed
  Cc: Mike Christie, Giridhar Malavali, James Bottomley, LinuxSCSI,
	Andrew Vasquez, vasu.dev, Jeremy Higdon



Michael Reed wrote:
> Mike Christie wrote:
>   
>> On 09/29/2009 08:34 PM, Giridhar Malavali wrote:
>>     
>>> 3) From your previous mail, I understand that you don't require a
>>> combined limit per target. Say the total queue depth for all LUN's on a
>>> particular target should not exceed some threshold.
>>>
>>>       
>> James Smart had done this patch
>> http://marc.info/?l=linux-scsi&m=121070114018354&w=2
>> where it sets the starget->can_queue based on info we get from vendors. 
>> The patch did not get merged. JamesB does not want the 
>> starget->can_queue to be static, and wants code like the queue full 
>> tracking code which dynamically ramps the device queue depth up and down.
>>     
>
> Agree.  Some amount of dynamic management of queue full seems desirable.
> I believe any such dynamic management needs to acknowledge that it
> exists in a multi-initiator environment, i.e., might get a QUEUE_FULL
> with no other commands outstanding.
>
>   
Completely agree - but there are multiple levels to the problem, many of 
which are at odds with each other....

>> I am not sure if JamesB meant that he wants to ramp down the 
>> starget->can_queue based on getting a QUEUE_FULL though. I thought he 
>> just meant he wants it to be dynamic. 
>>     
>
> What does "be dynamic" mean if not adjusted based upon a target's
> response to scsi commands?
>
>   
>> If I am right, then I think we 
>> could use JamesS's patch to set an initial starget->can_queue and add 
>> another field for a max value. Then we could add some code that ramps 
>> down/up based on something like command completion time or throughput or 
>> some other value.
>>     
>
>   
The desire of my patch was to aid single-initiator cases where multiple 
luns share the target, and there is a per-target resource limit, such as 
a maximum number of commands per target port.  Thus, the can_queue caps 
the number of commands allowed to be issued to the target.

In single-initiator, it would effectively stop QUEUE_FULLS from the 
target, aiding two issues:
- If the lun queue levels overcommit the target, I have seen targets 
that are so busy just handling the receive of the cmd frames, that they 
don't have enough cycles to send QUEUE_FULLS back, or punt and drop 
commands on the floor, or in the worst case, are so consumed they stop 
work on everything, including i/o they had already received. Note: these 
behaviors play havoc on any backoff algorithm dependent upon target 
response.  The OEMs try to solve this by providing configuration 
formulas. However, these have so many variables they are very complex to 
do right, and inexperienced admins may not know the formulas at all.  If 
we cap the outstanding i/o count, we never overcommit, and never see 
these headaches.
- Assuming that backoff algorithms are done per-lun, and that per-lun 
queue levels drop to the outstanding load and slowly ramp back up - there's 
an implicit biasing that takes place toward the luns that already have 
i/o outstanding or have yet to send i/o - meaning their queue levels are 
higher than that of the lun that saw the QUEUE_FULL. As they have more credit to 
submit i/o, they may always consume more of the target than the 
backed-off lun, never allowing it to get back to a level playing field.  If 
we avoid the QUEUE_FULLS to begin with, this biasing is lessened  (but 
not removed, as it moves back into the io scheduling area, which we 
always have).

In multi-initiator, it really doesn't change the problem, but it will 
lessen the degree to which an over-committed target is overwhelmed, 
which has to be goodness. Especially in cases where the target behaves 
like I described above.

My intent is that the cap per-target is static.  It would be initialized 
from the device record at device detection. I have no problem allowing a 
sysfs parameter to change its value.  However, I do not believe we want 
a ramp-up/ramp-down on this value.
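A minimal sketch of that static per-target cap as a dispatch-time gate (field and function names are invented; the real driver/midlayer interfaces differ, and 0 is the backward-compatible "unlimited" default):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical static per-target cap: set once from the device
 * record at detection (or via sysfs), never ramped. */
struct target_cap {
	int cap;         /* 0 = unlimited (backward-compatible) */
	int outstanding; /* commands currently issued to the target */
};

/* Gate a command at dispatch: refuse to exceed the target cap, so
 * the target never sees the overcommit (and never has to return
 * QUEUE_FULL on the wire). The caller requeues on failure. */
static bool target_may_queue(struct target_cap *t)
{
	if (t->cap && t->outstanding >= t->cap)
		return false;
	t->outstanding++;
	return true;
}

static void target_complete(struct target_cap *t)
{
	t->outstanding--;
}
```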

My idea for the queuing algorithms is that we would have several 
selectable policies - on both the target and lun. If we did do a 
target-based ramp-up/ramp-down, I would have the device record value I 
added be the "max" (and max can be unlimited), and when the algorithm 
was selected, have the initial value (if necessary) and ramp up/down 
parameters specified.

> We don't necessarily need or want can_queue set by a value encoded into
> a kernel table.  Some of our raid devices' can_queue values vary based
> upon the firmware they are running.  A static table would, at best, be a
> decent starting point.  At worst, it could dramatically over-commit the
> target.  Our raid devices' max can_queue is either per raid controller
> or per host port.
>
> Whatever path we go down, I view having a user programmable upper bound
> as a requirement.
>
>   
Agree. If your device behaves as you stated - then don't set a maximum, 
which is the default backward-compatible part of my patch.

>> If JamesB did mean that he wanted to ramp down the starget->can_queue 
>> based on QUEUE_FULLs then JamesS and JamesB do not agree on that and we 
>> are stuck.
>>     
>
> I don't consider ramp up/down of starget->can_queue a requirement.
> But I also don't consider its presence a problem.
>
>   
Agreed.  My preference is a ramp up/down on a per-lun basis. However, 
you may select an algorithm that manipulates all luns at the same time 
for a target.

> Our requirements are pretty simple: the ability to limit the number
> of commands queued to a target or lun in a multi-initiator environment
> such that no individual initiator can fully consume the resources
> of the target/lun.  I.e., we want a user programmable upper bound
> on all queue_depth and can_queue adjustments.  (Yes, I've stated this
> a few times.  :)
>
>
>   
Easy to state, not so easy to truly do. But I'm in agreement.

-- james s

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: qla2xxx: Conditionally disable automatic queue full tracking
  2009-10-02 17:17                 ` James Smart
@ 2009-10-06 17:17                   ` Michael Reed
  0 siblings, 0 replies; 14+ messages in thread
From: Michael Reed @ 2009-10-06 17:17 UTC (permalink / raw)
  To: James Smart
  Cc: Mike Christie, Giridhar Malavali, James Bottomley, LinuxSCSI,
	Andrew Vasquez, vasu.dev, Jeremy Higdon



James Smart wrote:
> 
> Michael Reed wrote:
>> Mike Christie wrote:
>>   
>>> On 09/29/2009 08:34 PM, Giridhar Malavali wrote:
>>>     
>>>> 3) From your previous mail, I understand that you don't require a
>>>> combined limit per target. Say the total queue depth for all LUN's on a
>>>> particular target should not exceed some threshold.
>>>>
>>>>       
>>> James Smart had done this patch
>>> http://marc.info/?l=linux-scsi&m=121070114018354&w=2
>>> where it sets the starget->can_queue based on info we get from vendors. 
>>> The patch did not get merged. JamesB does not want the 
>>> starget->can_queue to be static, and wants code like the queue full 
>>> tracking code which dynamically ramps the device queue depth up and down.
>>>     
>> Agree.  Some amount of dynamic management of queue full seems desirable.
>> I believe any such dynamic management needs to acknowledge that it
>> exists in a multi-initiator environment, i.e., might get a QUEUE_FULL
>> with no other commands outstanding.
>>
>>   
> Completely agree - but there are multiple levels to the problem, many of 
> which are at odds with each other....
> 
>>> I am not sure if JamesB meant that he wants to ramp down the 
>>> starget->can_queue based on getting a QUEUE_FULL though. I thought he 
>>> just meant he wants it to be dynamic. 
>>>     
>> What does "be dynamic" mean if not adjusted based upon a target's
>> response to scsi commands?
>>
>>   
>>> If I am right, then I think we 
>>> could use JamesS's patch to set an initial starget->can_queue and add 
>>> another field for a max value. Then we could add some code that ramps 
>>> down/up based on something like command completion time or throughput or 
>>> some other value.
>>>     
>>   
> The desire of my patch was to aid single-initiator cases where multiple 
> luns share the target, and there is a per-target resource limit, such as 
> a maximum number of commands per target port.  Thus, the can_queue caps 
> the number of commands allowed to be issued to the target.

This is a good thing for multi-initiator as well, moving the limit/throttle
up from the lun, where the queue_depth might otherwise have to be set quite
a bit lower than what good performance requires.

> 
> In single-initiator, it would effectively stop QUEUE_FULLS from the 
> target, aiding two issues:
> - If the lun queue levels over-commit the target, I have seen targets 
> that are so busy just handling the receive of the cmd frames, that they 
> don't have enough cycles to send QUEUE_FULLS back, or punt and drop 
> commands on the floor, or in the worst case, are so consumed they stop 
> work on everything, including i/o they had already received. 

I've seen them corrupt data under these circumstances.

> Note: these 
> behaviors play havoc on any backoff algorithm dependent upon target 
> response.  The OEMs try to solve this by providing configuration 
> formulas. However, these have so many variables they are very complex to 
> do right, and inexperienced admins may not know the formulas at all.  If 
> we cap the outstanding i/o count, we never overcommit, and never see 
> these headaches.
> - Assuming that backoff algorithms are done per-lun, and that per-lun 
> queue levels drop to the outstanding load and slowly ramp back up - there's 
> an implicit biasing that takes place toward the luns that already have 
> i/o outstanding or have yet to send i/o - meaning their queue levels are 
> higher than that of the lun that saw the QUEUE_FULL. As they have more credit to 
> submit i/o, they may always consume more of the target than the 
> backed-off lun, never allowing it to get back to a level playing field.  If 
> we avoid the QUEUE_FULLS to begin with, this biasing is lessened  (but 
> not removed, as it moves back into the io scheduling area, which we 
> always have).

This seems like an argument in favor of having per target throttling
for this class of device.

> 
> In multi-initiator, it really doesn't change the problem, but it will 
> lessen the degree to which an over-committed target is overwhelmed, 
> which has to be goodness. Especially in cases where the target behaves 
> like I described above.
> 
> My intent is that the cap per-target is static.  It would be initialized 
> from the device record at device detection. I have no problem allowing a 
> sysfs parameter to change its value.  However, I do not believe we want 
> a ramp-up/ramp-down on this value.

A ramp-up / ramp-down should only be necessary in the event of a
misconfiguration.  That would make it more necessary in multi-initiator
than single initiator.

> 
> My idea for the queuing algorithms is that we would have several 
> selectable policies - on both the target and lun. If we did do a 
> target-based ramp-up/ramp-down, I would have the device record value I 
> added be the "max" (and max can be unlimited), and when the algorithm 
> was selected, have the initial value (if necessary) and ramp up/down 
> parameters specified.
> 
>> We don't necessarily need or want can_queue set by a value encoded into
>> a kernel table.  Some of our raid devices' can_queue values vary based
>> upon the firmware they are running.  A static table would, at best, be a
>> decent starting point.  At worst, it could dramatically over-commit the
>> target.  Our raid devices' max can_queue is either per raid controller
>> or per host port.
>>
>> Whatever path we go down, I view having a user programmable upper bound
>> as a requirement.
>>
>>   
> Agree. If your device behaves as you stated - then don't set a maximum, 
> which is the default backward-compatible part of my patch.

Some of our devices use the inquiry data of their manufacturer.
This still puts us at the mercy of the values in the table.
I suspect you're looking at a starting point for a can_queue
limit, but it doesn't eliminate the need for user space modification
of that limit.  Why not start with an infinite limit, no
kernel-enshrined data, and just have udev adjust it down based
upon a configuration file which can be updated independently of
the kernel?  udev should be able to coordinate this.
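On the user-space side, that udev-driven lookup might be sketched like this; the config format, names, and the -1 "unlimited" convention are all invented for illustration (a udev helper would write the result into a sysfs attribute):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Look up a can_queue cap for a vendor/model pair in a config text
 * of "VENDOR MODEL CAP" lines; return -1 (unlimited) if there is no
 * entry - i.e., start unlimited, as suggested above. */
static int lookup_cap(const char *conf, const char *vendor,
		      const char *model)
{
	char v[32], m[32];
	int cap;
	const char *p = conf;

	while (p && *p) {
		if (sscanf(p, "%31s %31s %d", v, m, &cap) == 3 &&
		    strcmp(v, vendor) == 0 && strcmp(m, model) == 0)
			return cap;
		p = strchr(p, '\n');	/* advance to the next line */
		if (p)
			p++;
	}
	return -1;
}
```

Because the file lives in user space, a firmware-specific correction is a config edit rather than a kernel update, which addresses the objection above to a static kernel table.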

> 
>>> If JamesB did mean that he wanted to ramp down the starget->can_queue 
>>> based on QUEUE_FULLs then JamesS and JamesB do not agree on that and we 
>>> are stuck.
>>>     
>> I don't consider ramp up/down of starget->can_queue a requirement.
>> But I also don't consider its presence a problem.
>>
>>   
> Agreed.  My preference is a ramp up/down on a per-lun basis. However, 
> you may select an algorithm that manipulates all luns at the same time 
> for a target.

I would think that, if we're going to do ramp up / ramp down,
ramping at the target as well as the lun would be more understandable
to the end user than trying to ramp only luns.  It puts the
management of the resource at the level of the resource, or closer
to it in the case of a resource that lives at the raid controller
level.

Target ramping is meaningless for some configs, and lun ramping seems
less appropriate for others.

> 
>> Our requirements are pretty simple: the ability to limit the number
>> of commands queued to a target or lun in a multi-initiator environment
>> such that no individual initiator can fully consume the resources
>> of the target/lun.  I.e., we want a user programmable upper bound
>> on all queue_depth and can_queue adjustments.  (Yes, I've stated this
>> a few times.  :)
>>
>>
>>   
> Easy to state, not so easy to truly do. But I'm in agreement.

Just making certain that the system doesn't sabotage the efforts of
an intelligent admin seems like a good starting place.

Having a per-target can_queue, without any additional adjustment
other than the current per-lun adjustments, would go a long way toward
providing a solution that solves a significant percentage of the issues.

But, I'd vote for throttling at the target as well as the lun, and
being able to enable or disable as appropriate.
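Throttling at both levels reduces to a two-stage check at dispatch; a sketch under the assumption that a limit of 0 means that level's throttle is disabled (names are hypothetical, not midlayer code):

```c
#include <assert.h>
#include <stdbool.h>

/* One throttle per level (lun and target); 0 disables that level. */
struct throttle {
	int limit;
	int outstanding;
};

static bool level_may_queue(const struct throttle *l)
{
	return l->limit == 0 || l->outstanding < l->limit;
}

/* A command must find room at both its lun and the parent target
 * before it may be issued; otherwise the caller requeues it and
 * retries after a completion frees a slot. */
static bool dispatch(struct throttle *lun, struct throttle *tgt)
{
	if (!level_may_queue(lun) || !level_may_queue(tgt))
		return false;
	lun->outstanding++;
	tgt->outstanding++;
	return true;
}
```

Note how the target-level check is what keeps several under-limit luns from jointly overcommitting a shared controller.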

Once we agree in principle, perhaps we should consider what knobs need
to be present at each level that permits throttling?

Mike

> 
> -- james s


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2009-10-06 17:18 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-09-23 23:59 qla2xxx: Conditionally disable automatic queue full tracking Giridhar Malavali
2009-09-24 14:42 ` Michael Reed
2009-09-24 17:05   ` Giridhar Malavali
2009-09-24 19:15     ` Michael Reed
2009-09-24 20:55       ` Giridhar Malavali
2009-09-24 21:02         ` Michael Reed
2009-09-30  1:34           ` Giridhar Malavali
2009-09-30 13:08             ` Michael Reed
2009-09-30 13:43               ` Michael Reed
2009-09-30 18:23             ` Mike Christie
2009-10-02  0:19               ` Michael Reed
2009-10-02 17:17                 ` James Smart
2009-10-06 17:17                   ` Michael Reed
2009-09-24 19:49 ` Mike Christie
