linux-kernel.vger.kernel.org archive mirror
* [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
@ 2011-12-22  3:10 Shaohua Li
  2011-12-22 18:27 ` James Bottomley
  0 siblings, 1 reply; 8+ messages in thread
From: Shaohua Li @ 2011-12-22  3:10 UTC (permalink / raw)
  To: lkml, linux-scsi
  Cc: JBottomley, Jens Axboe, Christoph Hellwig, Ted Ts'o, Wu,
	Fengguang, Darrick J. Wong

scsi_run_queue() picks every sdev off the host's starved_list onto a local list,
then handles them. If multiple threads run scsi_run_queue() concurrently, the
starved_list gets messed up. This is quite common, because request rq_affinity
is on by default.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
 drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

Index: linux/drivers/scsi/scsi_lib.c
===================================================================
--- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
+++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
@@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
  */
 static void scsi_run_queue(struct request_queue *q)
 {
-	struct scsi_device *sdev = q->queuedata;
+	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
 	struct Scsi_Host *shost;
-	LIST_HEAD(starved_list);
 	unsigned long flags;
 
 	/* if the device is dead, sdev will be NULL, so no queue to run */
@@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
 		scsi_single_lun_run(sdev);
 
 	spin_lock_irqsave(shost->host_lock, flags);
-	list_splice_init(&shost->starved_list, &starved_list);
 
-	while (!list_empty(&starved_list)) {
+	while (!list_empty(&shost->starved_list)) {
 		/*
 		 * As long as shost is accepting commands and we have
 		 * starved queues, call blk_run_queue. scsi_request_fn
@@ -431,8 +429,13 @@ static void scsi_run_queue(struct reques
 		if (scsi_host_is_busy(shost))
 			break;
 
-		sdev = list_entry(starved_list.next,
+		sdev = list_entry(shost->starved_list.next,
 				  struct scsi_device, starved_entry);
+		if (sdev == head_sdev)
+			break;
+		if (!head_sdev)
+			head_sdev = sdev;
+
 		list_del_init(&sdev->starved_entry);
 		if (scsi_target_is_busy(scsi_target(sdev))) {
 			list_move_tail(&sdev->starved_entry,
@@ -445,9 +448,13 @@ static void scsi_run_queue(struct reques
 		__blk_run_queue(sdev->request_queue);
 		spin_unlock(sdev->request_queue->queue_lock);
 		spin_lock(shost->host_lock);
+		/*
+		 * the head sdev is no longer starved and removed from the
+		 * starved list, select a new sdev as head.
+		 */
+		if (head_sdev == sdev && list_empty(&sdev->starved_entry))
+			head_sdev = NULL;
 	}
-	/* put any unprocessed entries back */
-	list_splice(&starved_list, &shost->starved_list);
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
 	blk_run_queue(q);



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2011-12-22  3:10 [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev Shaohua Li
@ 2011-12-22 18:27 ` James Bottomley
  2011-12-23  0:40   ` Shaohua Li
  0 siblings, 1 reply; 8+ messages in thread
From: James Bottomley @ 2011-12-22 18:27 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> scsi_run_queue() picks off all sdev from host starved_list to a local list,
> then handle them. If there are multiple threads running scsi_run_queue(),
> the starved_list will get messed. This is quite common, because request
> rq_affinity is on by default.
> 
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> ---
>  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
>  1 file changed, 14 insertions(+), 7 deletions(-)
> 
> Index: linux/drivers/scsi/scsi_lib.c
> ===================================================================
> --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
>   */
>  static void scsi_run_queue(struct request_queue *q)
>  {
> -	struct scsi_device *sdev = q->queuedata;
> +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
>  	struct Scsi_Host *shost;
> -	LIST_HEAD(starved_list);
>  	unsigned long flags;
>  
>  	/* if the device is dead, sdev will be NULL, so no queue to run */
> @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
>  		scsi_single_lun_run(sdev);
>  
>  	spin_lock_irqsave(shost->host_lock, flags);
> -	list_splice_init(&shost->starved_list, &starved_list);
>  
> -	while (!list_empty(&starved_list)) {
> +	while (!list_empty(&shost->starved_list)) {

The original reason for working from a copy instead of the original list
was that a device can end up back on the starved list because of a
variety of conditions in the HBA, so iterating the original list directly
could cause the loop never to exit. So this piece of the patch doesn't
look right to me.
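
For what it's worth, the termination property the splice gives can be shown
with a small stand-alone C model (assumption: the list helpers below only
mimic <linux/list.h> and are not the kernel implementation). An entry that
re-queues itself goes back onto the global list, so the walk over the spliced
local copy touches each original entry exactly once and always finishes:

#include <stdio.h>

struct node {
	struct node *next, *prev;
	int id;
};

static void list_init(struct node *h)
{
	h->next = h->prev = h;
}

static int list_empty(struct node *h)
{
	return h->next == h;
}

static void list_del_init(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

static void list_add_tail(struct node *n, struct node *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* move every entry of src onto the tail of dst, leaving src empty */
static void list_splice_init(struct node *src, struct node *dst)
{
	while (!list_empty(src)) {
		struct node *n = src->next;

		list_del_init(n);
		list_add_tail(n, dst);
	}
}

int main(void)
{
	struct node global, local, dev[3], *n;
	int i, visits = 0, left = 0;

	list_init(&global);
	list_init(&local);
	for (i = 0; i < 3; i++) {
		dev[i].id = i;
		list_add_tail(&dev[i], &global);
	}

	list_splice_init(&global, &local);	/* the original pattern */
	while (!list_empty(&local)) {
		n = local.next;
		list_del_init(n);
		visits++;
		/* pretend the HBA is still busy: the device re-queues
		 * itself, but onto the GLOBAL list, not the local copy */
		list_add_tail(n, &global);
	}

	for (n = global.next; n != &global; n = n->next)
		left++;
	printf("visited %d entries; %d back on the global list\n",
	       visits, left);
	return 0;
}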

James

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2011-12-22 18:27 ` James Bottomley
@ 2011-12-23  0:40   ` Shaohua Li
  2011-12-23  1:17     ` James Bottomley
  0 siblings, 1 reply; 8+ messages in thread
From: Shaohua Li @ 2011-12-23  0:40 UTC (permalink / raw)
  To: James Bottomley
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Thu, 2011-12-22 at 18:27 +0000, James Bottomley wrote:
> On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> > scsi_run_queue() picks off all sdev from host starved_list to a local list,
> > then handle them. If there are multiple threads running scsi_run_queue(),
> > the starved_list will get messed. This is quite common, because request
> > rq_affinity is on by default.
> > 
> > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > ---
> >  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
> >  1 file changed, 14 insertions(+), 7 deletions(-)
> > 
> > Index: linux/drivers/scsi/scsi_lib.c
> > ===================================================================
> > --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> > +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> > @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
> >   */
> >  static void scsi_run_queue(struct request_queue *q)
> >  {
> > -	struct scsi_device *sdev = q->queuedata;
> > +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
> >  	struct Scsi_Host *shost;
> > -	LIST_HEAD(starved_list);
> >  	unsigned long flags;
> >  
> >  	/* if the device is dead, sdev will be NULL, so no queue to run */
> > @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
> >  		scsi_single_lun_run(sdev);
> >  
> >  	spin_lock_irqsave(shost->host_lock, flags);
> > -	list_splice_init(&shost->starved_list, &starved_list);
> >  
> > -	while (!list_empty(&starved_list)) {
> > +	while (!list_empty(&shost->starved_list)) {
> 
> The original reason for working from a copy instead of the original list
> was that the device can end up back on the starved list because of a
> variety of conditions in the HBA and so this would cause the loop not to
> exit, so this piece of the patch doesn't look right to me.
+               /*
+                * the head sdev is no longer starved and removed from the
+                * starved list, select a new sdev as head.
+                */
+               if (head_sdev == sdev && list_empty(&sdev->starved_entry))
+                       head_sdev = NULL;
I had this in the loop; it guarantees the loop will exit once a
device is removed from the starved list.

Thanks,
Shaohua


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2011-12-23  0:40   ` Shaohua Li
@ 2011-12-23  1:17     ` James Bottomley
  2011-12-23  1:53       ` Shaohua Li
  0 siblings, 1 reply; 8+ messages in thread
From: James Bottomley @ 2011-12-23  1:17 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Fri, 2011-12-23 at 08:40 +0800, Shaohua Li wrote:
> On Thu, 2011-12-22 at 18:27 +0000, James Bottomley wrote:
> > On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> > > scsi_run_queue() picks off all sdev from host starved_list to a local list,
> > > then handle them. If there are multiple threads running scsi_run_queue(),
> > > the starved_list will get messed. This is quite common, because request
> > > rq_affinity is on by default.
> > > 
> > > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > > ---
> > >  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
> > >  1 file changed, 14 insertions(+), 7 deletions(-)
> > > 
> > > Index: linux/drivers/scsi/scsi_lib.c
> > > ===================================================================
> > > --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> > > +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> > > @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
> > >   */
> > >  static void scsi_run_queue(struct request_queue *q)
> > >  {
> > > -	struct scsi_device *sdev = q->queuedata;
> > > +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
> > >  	struct Scsi_Host *shost;
> > > -	LIST_HEAD(starved_list);
> > >  	unsigned long flags;
> > >  
> > >  	/* if the device is dead, sdev will be NULL, so no queue to run */
> > > @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
> > >  		scsi_single_lun_run(sdev);
> > >  
> > >  	spin_lock_irqsave(shost->host_lock, flags);
> > > -	list_splice_init(&shost->starved_list, &starved_list);
> > >  
> > > -	while (!list_empty(&starved_list)) {
> > > +	while (!list_empty(&shost->starved_list)) {
> > 
> > The original reason for working from a copy instead of the original list
> > was that the device can end up back on the starved list because of a
> > variety of conditions in the HBA and so this would cause the loop not to
> > exit, so this piece of the patch doesn't look right to me.
> +               /*
> +                * the head sdev is no longer starved and removed from the
> +                * starved list, select a new sdev as head.
> +                */
> +               if (head_sdev == sdev && list_empty(&sdev->starved_entry))
> +                       head_sdev = NULL;
> I had this in the loop, which is to guarantee the loop will exit if a
> device is removed from the starved list.

And what about the non-head sdevs, which can also get put back onto the list?

What's the reason for not just traversing the list once using list
splice?

Basically, the changelog doesn't seem to explain what you're doing and
the logic looks flawed.

James



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2011-12-23  1:17     ` James Bottomley
@ 2011-12-23  1:53       ` Shaohua Li
  2012-01-09  7:31         ` Shaohua Li
  0 siblings, 1 reply; 8+ messages in thread
From: Shaohua Li @ 2011-12-23  1:53 UTC (permalink / raw)
  To: James Bottomley
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Thu, 2011-12-22 at 19:17 -0600, James Bottomley wrote:
> On Fri, 2011-12-23 at 08:40 +0800, Shaohua Li wrote:
> > On Thu, 2011-12-22 at 18:27 +0000, James Bottomley wrote:
> > > On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> > > > scsi_run_queue() picks off all sdev from host starved_list to a local list,
> > > > then handle them. If there are multiple threads running scsi_run_queue(),
> > > > the starved_list will get messed. This is quite common, because request
> > > > rq_affinity is on by default.
> > > > 
> > > > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > > > ---
> > > >  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
> > > >  1 file changed, 14 insertions(+), 7 deletions(-)
> > > > 
> > > > Index: linux/drivers/scsi/scsi_lib.c
> > > > ===================================================================
> > > > --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> > > > +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> > > > @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
> > > >   */
> > > >  static void scsi_run_queue(struct request_queue *q)
> > > >  {
> > > > -	struct scsi_device *sdev = q->queuedata;
> > > > +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
> > > >  	struct Scsi_Host *shost;
> > > > -	LIST_HEAD(starved_list);
> > > >  	unsigned long flags;
> > > >  
> > > >  	/* if the device is dead, sdev will be NULL, so no queue to run */
> > > > @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
> > > >  		scsi_single_lun_run(sdev);
> > > >  
> > > >  	spin_lock_irqsave(shost->host_lock, flags);
> > > > -	list_splice_init(&shost->starved_list, &starved_list);
> > > >  
> > > > -	while (!list_empty(&starved_list)) {
> > > > +	while (!list_empty(&shost->starved_list)) {
> > > 
> > > The original reason for working from a copy instead of the original list
> > > was that the device can end up back on the starved list because of a
> > > variety of conditions in the HBA and so this would cause the loop not to
> > > exit, so this piece of the patch doesn't look right to me.
> > +               /*
> > +                * the head sdev is no longer starved and removed from the
> > +                * starved list, select a new sdev as head.
> > +                */
> > +               if (head_sdev == sdev && list_empty(&sdev->starved_entry))
> > +                       head_sdev = NULL;
> > I had this in the loop, which is to guarantee the loop will exit if a
> > device is removed from the starved list.
> 
> And the non-head sdevs? which can also get put back onto the list
Yes, it will be put back at the list tail. Note we always handle the head
sdev first. As the iteration cursor moves from one device to the next, we
eventually hit head_sdev == sdev and the loop exits; if the cursor isn't
moving at all, that means scsi_host_is_busy() and we break. In both cases
the loop exits.
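
To make that concrete, here is a tiny stand-alone C model of the loop
(assumption: a plain array stands in for the starved list and nothing below
is kernel code). Even in the worst case, where every device is put straight
back at the tail after being handled, the loop stops after one full lap
because the cursor comes back to the remembered head; the other exit,
scsi_host_is_busy(), is not modelled here.

#include <stdio.h>

int main(void)
{
	int list[4] = { 0, 1, 2, 3 };	/* device ids on the starved list */
	int n = 4, head = -1, handled = 0;

	for (;;) {
		int dev = list[0];	/* current list head */
		int i;

		if (dev == head)	/* cursor is back at the remembered head */
			break;
		if (head < 0)
			head = dev;	/* remember the first head we saw */

		/* "handle" the device, then requeue it at the tail
		 * (worst case: it is still starved afterwards) */
		for (i = 1; i < n; i++)
			list[i - 1] = list[i];
		list[n - 1] = dev;
		handled++;
	}
	printf("handled %d devices, then stopped\n", handled);
	return 0;
}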

> What's the reason for not just traversing the list once using list
> splice?
> 
> Basically, the changelog doesn't seem to explain what you're doing and
> the logic looks flawed.
The main reason is that multiple CPUs each splice the list onto a local
list, handle it, and then put the unprocessed entries back. Multiple CPUs
handle the blk softirq because request rq_affinity is on by default, and
the put-back almost always happens because scsi_run_queue() is called after
every request completion, at which point the scsi_host has just one free
slot. The lack of coordination between these threads messes up the starved
list. I observed a device that never got a chance to dispatch a request
being put at the starved list tail, while a device that did dispatch stayed
at the head. This causes unfairness, which means some devices will get
starved.

Thanks,
Shaohua 


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2011-12-23  1:53       ` Shaohua Li
@ 2012-01-09  7:31         ` Shaohua Li
  2012-01-09 17:30           ` James Bottomley
  0 siblings, 1 reply; 8+ messages in thread
From: Shaohua Li @ 2012-01-09  7:31 UTC (permalink / raw)
  To: James Bottomley
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Fri, 2011-12-23 at 09:53 +0800, Shaohua Li wrote:
> On Thu, 2011-12-22 at 19:17 -0600, James Bottomley wrote:
> > On Fri, 2011-12-23 at 08:40 +0800, Shaohua Li wrote:
> > > On Thu, 2011-12-22 at 18:27 +0000, James Bottomley wrote:
> > > > On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> > > > > scsi_run_queue() picks off all sdev from host starved_list to a local list,
> > > > > then handle them. If there are multiple threads running scsi_run_queue(),
> > > > > the starved_list will get messed. This is quite common, because request
> > > > > rq_affinity is on by default.
> > > > > 
> > > > > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > > > > ---
> > > > >  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
> > > > >  1 file changed, 14 insertions(+), 7 deletions(-)
> > > > > 
> > > > > Index: linux/drivers/scsi/scsi_lib.c
> > > > > ===================================================================
> > > > > --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> > > > > +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> > > > > @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
> > > > >   */
> > > > >  static void scsi_run_queue(struct request_queue *q)
> > > > >  {
> > > > > -	struct scsi_device *sdev = q->queuedata;
> > > > > +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
> > > > >  	struct Scsi_Host *shost;
> > > > > -	LIST_HEAD(starved_list);
> > > > >  	unsigned long flags;
> > > > >  
> > > > >  	/* if the device is dead, sdev will be NULL, so no queue to run */
> > > > > @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
> > > > >  		scsi_single_lun_run(sdev);
> > > > >  
> > > > >  	spin_lock_irqsave(shost->host_lock, flags);
> > > > > -	list_splice_init(&shost->starved_list, &starved_list);
> > > > >  
> > > > > -	while (!list_empty(&starved_list)) {
> > > > > +	while (!list_empty(&shost->starved_list)) {
> > > > 
> > > > The original reason for working from a copy instead of the original list
> > > > was that the device can end up back on the starved list because of a
> > > > variety of conditions in the HBA and so this would cause the loop not to
> > > > exit, so this piece of the patch doesn't look right to me.
> > > +               /*
> > > +                * the head sdev is no longer starved and removed from the
> > > +                * starved list, select a new sdev as head.
> > > +                */
> > > +               if (head_sdev == sdev && list_empty(&sdev->starved_entry))
> > > +                       head_sdev = NULL;
> > > I had this in the loop, which is to guarantee the loop will exit if a
> > > device is removed from the starved list.
> > 
> > And the non-head sdevs? which can also get put back onto the list
> yes, it will be put back to the list tail. Note we always handle the
> head sdev first. move the cursor of list iterating to next dev and next,
> we will hit head_sdev == sdev eventually, then the loop exit. or the
> cursor isn't moving, which means we have scsi_host_is_busy(). in both
> cases, the loop will exit.
> 
> > What's the reason for not just traversing the list once using list
> > splice?
> > 
> > Basically, the changelog doesn't seem to explain what you're doing and
> > the logic looks flawed.
> The main reason is we have multiple CPU trying to get the list to its
> local list and then handle it, then putback unprocessed entries back. we
> have multiple cpu handles blk softirq, because request rq_affinity is on
> default. And the putback always happen, because scsi_run_queue is called
> after every request finish. When it's called, scsi_host has just one
> free slot. The in-coordination of multiple threads handle of the starved
> list will mess the list. I observed dev which doesn't get chance to
> dispatch request is put to starved list tail, while dev which dispatches
> request is still at head. eg, this will cause unfairness. This means
> some devices will get starved.
James,
can you take a look at this? The two patches give a 12% improvement in a
simple file-creation workload. I updated the patch log.

Thanks,
Shaohua

Subject: scsi: scsi_run_queue() doesn't use local list to handle starved sdev

scsi_run_queue() picks every sdev off the host's starved_list onto a local
list, handles them, and puts unprocessed entries back. Multiple CPUs handle
the blk softirq because request rq_affinity is on by default, and the
put-back almost always happens because scsi_run_queue() is called after
every request completion, at which point the scsi_host has just one free
slot. The lack of coordination between the threads handling the starved
list messes up the list. I observed a device that never got a chance to
dispatch a request being put at the starved list tail, while a device that
dispatched successfully stayed at the head. This causes unfairness, which
means some devices will get starved.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
 drivers/scsi/scsi_lib.c |   25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

Index: linux/drivers/scsi/scsi_lib.c
===================================================================
--- linux.orig/drivers/scsi/scsi_lib.c	2011-12-23 10:21:22.000000000 +0800
+++ linux/drivers/scsi/scsi_lib.c	2012-01-09 15:14:53.000000000 +0800
@@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
  */
 static void scsi_run_queue(struct request_queue *q)
 {
-	struct scsi_device *sdev = q->queuedata;
+	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
 	struct Scsi_Host *shost;
-	LIST_HEAD(starved_list);
 	unsigned long flags;
 
 	/* if the device is dead, sdev will be NULL, so no queue to run */
@@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
 		scsi_single_lun_run(sdev);
 
 	spin_lock_irqsave(shost->host_lock, flags);
-	list_splice_init(&shost->starved_list, &starved_list);
 
-	while (!list_empty(&starved_list)) {
+	while (!list_empty(&shost->starved_list)) {
 		/*
 		 * As long as shost is accepting commands and we have
 		 * starved queues, call blk_run_queue. scsi_request_fn
@@ -431,8 +429,13 @@ static void scsi_run_queue(struct reques
 		if (scsi_host_is_busy(shost))
 			break;
 
-		sdev = list_entry(starved_list.next,
+		sdev = list_entry(shost->starved_list.next,
 				  struct scsi_device, starved_entry);
+		if (sdev == head_sdev)
+			break;
+		if (!head_sdev)
+			head_sdev = sdev;
+
 		list_del_init(&sdev->starved_entry);
 		if (scsi_target_is_busy(scsi_target(sdev))) {
 			list_move_tail(&sdev->starved_entry,
@@ -445,9 +448,13 @@ static void scsi_run_queue(struct reques
 		__blk_run_queue(sdev->request_queue);
 		spin_unlock(sdev->request_queue->queue_lock);
 		spin_lock(shost->host_lock);
+		/*
+		 * the head sdev is no longer starved and removed from the
+		 * starved list, select a new sdev as head.
+		 */
+		if (head_sdev == sdev && list_empty(&sdev->starved_entry))
+			head_sdev = NULL;
 	}
-	/* put any unprocessed entries back */
-	list_splice(&starved_list, &shost->starved_list);
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
 	blk_run_queue(q);
@@ -1316,9 +1323,7 @@ static inline int scsi_target_queue_read
 	}
 
 	if (scsi_target_is_busy(starget)) {
-		if (list_empty(&sdev->starved_entry))
-			list_add_tail(&sdev->starved_entry,
-				      &shost->starved_list);
+		list_move_tail(&sdev->starved_entry, &shost->starved_list);
 		return 0;
 	}
 




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2012-01-09  7:31         ` Shaohua Li
@ 2012-01-09 17:30           ` James Bottomley
  2012-01-10  3:27             ` Shaohua Li
  0 siblings, 1 reply; 8+ messages in thread
From: James Bottomley @ 2012-01-09 17:30 UTC (permalink / raw)
  To: Shaohua Li
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Mon, 2012-01-09 at 15:31 +0800, Shaohua Li wrote:
> On Fri, 2011-12-23 at 09:53 +0800, Shaohua Li wrote:
> > On Thu, 2011-12-22 at 19:17 -0600, James Bottomley wrote:
> > > On Fri, 2011-12-23 at 08:40 +0800, Shaohua Li wrote:
> > > > On Thu, 2011-12-22 at 18:27 +0000, James Bottomley wrote:
> > > > > On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> > > > > > scsi_run_queue() picks off all sdev from host starved_list to a local list,
> > > > > > then handle them. If there are multiple threads running scsi_run_queue(),
> > > > > > the starved_list will get messed. This is quite common, because request
> > > > > > rq_affinity is on by default.
> > > > > > 
> > > > > > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > > > > > ---
> > > > > >  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
> > > > > >  1 file changed, 14 insertions(+), 7 deletions(-)
> > > > > > 
> > > > > > Index: linux/drivers/scsi/scsi_lib.c
> > > > > > ===================================================================
> > > > > > --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> > > > > > +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> > > > > > @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
> > > > > >   */
> > > > > >  static void scsi_run_queue(struct request_queue *q)
> > > > > >  {
> > > > > > -	struct scsi_device *sdev = q->queuedata;
> > > > > > +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
> > > > > >  	struct Scsi_Host *shost;
> > > > > > -	LIST_HEAD(starved_list);
> > > > > >  	unsigned long flags;
> > > > > >  
> > > > > >  	/* if the device is dead, sdev will be NULL, so no queue to run */
> > > > > > @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
> > > > > >  		scsi_single_lun_run(sdev);
> > > > > >  
> > > > > >  	spin_lock_irqsave(shost->host_lock, flags);
> > > > > > -	list_splice_init(&shost->starved_list, &starved_list);
> > > > > >  
> > > > > > -	while (!list_empty(&starved_list)) {
> > > > > > +	while (!list_empty(&shost->starved_list)) {
> > > > > 
> > > > > The original reason for working from a copy instead of the original list
> > > > > was that the device can end up back on the starved list because of a
> > > > > variety of conditions in the HBA and so this would cause the loop not to
> > > > > exit, so this piece of the patch doesn't look right to me.
> > > > +               /*
> > > > +                * the head sdev is no longer starved and removed from the
> > > > +                * starved list, select a new sdev as head.
> > > > +                */
> > > > +               if (head_sdev == sdev && list_empty(&sdev->starved_entry))
> > > > +                       head_sdev = NULL;
> > > > I had this in the loop, which is to guarantee the loop will exit if a
> > > > device is removed from the starved list.
> > > 
> > > And the non-head sdevs? which can also get put back onto the list
> > yes, it will be put back to the list tail. Note we always handle the
> > head sdev first. move the cursor of list iterating to next dev and next,
> > we will hit head_sdev == sdev eventually, then the loop exit. or the
> > cursor isn't moving, which means we have scsi_host_is_busy(). in both
> > cases, the loop will exit.
> > 
> > > What's the reason for not just traversing the list once using list
> > > splice?
> > > 
> > > Basically, the changelog doesn't seem to explain what you're doing and
> > > the logic looks flawed.
> > The main reason is we have multiple CPU trying to get the list to its
> > local list and then handle it, then putback unprocessed entries back. we
> > have multiple cpu handles blk softirq, because request rq_affinity is on
> > default. And the putback always happen, because scsi_run_queue is called
> > after every request finish. When it's called, scsi_host has just one
> > free slot. The in-coordination of multiple threads handle of the starved
> > list will mess the list. I observed dev which doesn't get chance to
> > dispatch request is put to starved list tail, while dev which dispatches
> > request is still at head. eg, this will cause unfairness. This means
> > some devices will get starved.
> James,
> can you take a look at it? the two patches give 12% improvement in a
> simple file creation workload. I updated the patch log.

The exit criteria still don't look right to me.

Given two devices on the starved list, s1 and s2: first time around the
loop head_sdev is s1; we run the queue and it gets put back.  Now we loop
to s2.  As we drop the lock to run the queue, a competing thread gets in,
runs the starved loop and s1 gets removed.  We run s2, it gets put back,
but we don't see that s1 is gone (since the sdev we're processing is s2
and the check head_sdev == sdev fails) and it remains our head_sdev.  The
result is a loop until s2 gets removed, which would be highly
undesirable.

The problem essentially is that you're using local variables to track
what you've made a global list, so any other list manipulation external
to your local variables is invisible to you.  You can fix this by
checking list_empty(&head_sdev->starved_entry) instead, but this will
still lead to running the starved list up to twice as much as we would
have previously.
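
Spelled out, that alternative test would be something like the following
(untested sketch against the patched loop above, shown only for
illustration):

		/*
		 * Reset the remembered head as soon as it has left the
		 * starved list, regardless of which sdev we just ran or
		 * which thread removed it while the host lock was dropped.
		 */
		if (head_sdev && list_empty(&head_sdev->starved_entry))
			head_sdev = NULL;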

It strikes me that a far better (and certainly easier to follow) approach
is to split the queue up by CPU affinity.  So on the first pass, instead
of splicing everything, it just takes all the unaffine requests plus all
those affine to that CPU, but leaves all those whose affinities don't
match the CPU on the global list (for any competing thread, which must
necessarily be on a different CPU, to process).
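
A rough sketch of that per-CPU split (untested and purely illustrative:
scsi_device has no per-CPU affinity field today, so the starved_cpu member
and the -1 "unaffine" convention below are assumptions, not existing kernel
API):

static void splice_affine_starved(struct Scsi_Host *shost,
				  struct list_head *local_list, int cpu)
{
	struct scsi_device *sdev, *tmp;

	/* caller holds shost->host_lock */
	list_for_each_entry_safe(sdev, tmp, &shost->starved_list,
				 starved_entry) {
		/* take unaffine entries plus those affine to this CPU;
		 * leave the rest for threads running on other CPUs */
		if (sdev->starved_cpu == -1 || sdev->starved_cpu == cpu)
			list_move_tail(&sdev->starved_entry, local_list);
	}
}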

James

> Thanks,
> Shaohua
> 
> Subject: scsi: scsi_run_queue() doesn't use local list to handle starved sdev
> 
> scsi_run_queue() picks off all sdev from host starved_list to a local list,
> then handle them, and putback unprocessed entries back. we have multiple cpu
> handles blk softirq, because request rq_affinity is on by default. And the
> putback always happen, because scsi_run_queue() is called after every request
> finish. When it's called, scsi_host has just one free slot. The in-coordination
> of multiple threads handle of the starved list will mess the list. I observed
> dev which doesn't get chance to dispatch request is put to starved list tail,
> while dev which dispatches request successfully is still at head. This will
> cause unfairness, which means some devices will get starved.
> 
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> ---
>  drivers/scsi/scsi_lib.c |   25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
> 
> Index: linux/drivers/scsi/scsi_lib.c
> ===================================================================
> --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-23 10:21:22.000000000 +0800
> +++ linux/drivers/scsi/scsi_lib.c	2012-01-09 15:14:53.000000000 +0800
> @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
>   */
>  static void scsi_run_queue(struct request_queue *q)
>  {
> -	struct scsi_device *sdev = q->queuedata;
> +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
>  	struct Scsi_Host *shost;
> -	LIST_HEAD(starved_list);
>  	unsigned long flags;
>  
>  	/* if the device is dead, sdev will be NULL, so no queue to run */
> @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
>  		scsi_single_lun_run(sdev);
>  
>  	spin_lock_irqsave(shost->host_lock, flags);
> -	list_splice_init(&shost->starved_list, &starved_list);
>  
> -	while (!list_empty(&starved_list)) {
> +	while (!list_empty(&shost->starved_list)) {
>  		/*
>  		 * As long as shost is accepting commands and we have
>  		 * starved queues, call blk_run_queue. scsi_request_fn
> @@ -431,8 +429,13 @@ static void scsi_run_queue(struct reques
>  		if (scsi_host_is_busy(shost))
>  			break;
>  
> -		sdev = list_entry(starved_list.next,
> +		sdev = list_entry(shost->starved_list.next,
>  				  struct scsi_device, starved_entry);
> +		if (sdev == head_sdev)
> +			break;
> +		if (!head_sdev)
> +			head_sdev = sdev;
> +
>  		list_del_init(&sdev->starved_entry);
>  		if (scsi_target_is_busy(scsi_target(sdev))) {
>  			list_move_tail(&sdev->starved_entry,
> @@ -445,9 +448,13 @@ static void scsi_run_queue(struct reques
>  		__blk_run_queue(sdev->request_queue);
>  		spin_unlock(sdev->request_queue->queue_lock);
>  		spin_lock(shost->host_lock);
> +		/*
> +		 * the head sdev is no longer starved and removed from the
> +		 * starved list, select a new sdev as head.
> +		 */
> +		if (head_sdev == sdev && list_empty(&sdev->starved_entry))
> +			head_sdev = NULL;
>  	}
> -	/* put any unprocessed entries back */
> -	list_splice(&starved_list, &shost->starved_list);
>  	spin_unlock_irqrestore(shost->host_lock, flags);
>  
>  	blk_run_queue(q);
> @@ -1316,9 +1323,7 @@ static inline int scsi_target_queue_read
>  	}
>  
>  	if (scsi_target_is_busy(starget)) {
> -		if (list_empty(&sdev->starved_entry))
> -			list_add_tail(&sdev->starved_entry,
> -				      &shost->starved_list);
> +		list_move_tail(&sdev->starved_entry, &shost->starved_list);
>  		return 0;
>  	}
>  
> 
> 
> 



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev
  2012-01-09 17:30           ` James Bottomley
@ 2012-01-10  3:27             ` Shaohua Li
  0 siblings, 0 replies; 8+ messages in thread
From: Shaohua Li @ 2012-01-10  3:27 UTC (permalink / raw)
  To: James Bottomley
  Cc: lkml, linux-scsi, Jens Axboe, Christoph Hellwig, Ted Ts'o,
	Wu, Fengguang, Darrick J. Wong

On Mon, 2012-01-09 at 11:30 -0600, James Bottomley wrote:
> On Mon, 2012-01-09 at 15:31 +0800, Shaohua Li wrote:
> > On Fri, 2011-12-23 at 09:53 +0800, Shaohua Li wrote:
> > > On Thu, 2011-12-22 at 19:17 -0600, James Bottomley wrote:
> > > > On Fri, 2011-12-23 at 08:40 +0800, Shaohua Li wrote:
> > > > > On Thu, 2011-12-22 at 18:27 +0000, James Bottomley wrote:
> > > > > > On Thu, 2011-12-22 at 11:10 +0800, Shaohua Li wrote:
> > > > > > > scsi_run_queue() picks off all sdev from host starved_list to a local list,
> > > > > > > then handle them. If there are multiple threads running scsi_run_queue(),
> > > > > > > the starved_list will get messed. This is quite common, because request
> > > > > > > rq_affinity is on by default.
> > > > > > > 
> > > > > > > Signed-off-by: Shaohua Li <shaohua.li@intel.com>
> > > > > > > ---
> > > > > > >  drivers/scsi/scsi_lib.c |   21 ++++++++++++++-------
> > > > > > >  1 file changed, 14 insertions(+), 7 deletions(-)
> > > > > > > 
> > > > > > > Index: linux/drivers/scsi/scsi_lib.c
> > > > > > > ===================================================================
> > > > > > > --- linux.orig/drivers/scsi/scsi_lib.c	2011-12-21 16:56:23.000000000 +0800
> > > > > > > +++ linux/drivers/scsi/scsi_lib.c	2011-12-22 09:33:09.000000000 +0800
> > > > > > > @@ -401,9 +401,8 @@ static inline int scsi_host_is_busy(stru
> > > > > > >   */
> > > > > > >  static void scsi_run_queue(struct request_queue *q)
> > > > > > >  {
> > > > > > > -	struct scsi_device *sdev = q->queuedata;
> > > > > > > +	struct scsi_device *sdev = q->queuedata, *head_sdev = NULL;
> > > > > > >  	struct Scsi_Host *shost;
> > > > > > > -	LIST_HEAD(starved_list);
> > > > > > >  	unsigned long flags;
> > > > > > >  
> > > > > > >  	/* if the device is dead, sdev will be NULL, so no queue to run */
> > > > > > > @@ -415,9 +414,8 @@ static void scsi_run_queue(struct reques
> > > > > > >  		scsi_single_lun_run(sdev);
> > > > > > >  
> > > > > > >  	spin_lock_irqsave(shost->host_lock, flags);
> > > > > > > -	list_splice_init(&shost->starved_list, &starved_list);
> > > > > > >  
> > > > > > > -	while (!list_empty(&starved_list)) {
> > > > > > > +	while (!list_empty(&shost->starved_list)) {
> > > > > > 
> > > > > > The original reason for working from a copy instead of the original list
> > > > > > was that the device can end up back on the starved list because of a
> > > > > > variety of conditions in the HBA and so this would cause the loop not to
> > > > > > exit, so this piece of the patch doesn't look right to me.
> > > > > +               /*
> > > > > +                * the head sdev is no longer starved and removed from the
> > > > > +                * starved list, select a new sdev as head.
> > > > > +                */
> > > > > +               if (head_sdev == sdev && list_empty(&sdev->starved_entry))
> > > > > +                       head_sdev = NULL;
> > > > > I had this in the loop, which is to guarantee the loop will exit if a
> > > > > device is removed from the starved list.
> > > > 
> > > > And the non-head sdevs? which can also get put back onto the list
> > > yes, it will be put back to the list tail. Note we always handle the
> > > head sdev first. move the cursor of list iterating to next dev and next,
> > > we will hit head_sdev == sdev eventually, then the loop exit. or the
> > > cursor isn't moving, which means we have scsi_host_is_busy(). in both
> > > cases, the loop will exit.
> > > 
> > > > What's the reason for not just traversing the list once using list
> > > > splice?
> > > > 
> > > > Basically, the changelog doesn't seem to explain what you're doing and
> > > > the logic looks flawed.
> > > The main reason is we have multiple CPU trying to get the list to its
> > > local list and then handle it, then putback unprocessed entries back. we
> > > have multiple cpu handles blk softirq, because request rq_affinity is on
> > > default. And the putback always happen, because scsi_run_queue is called
> > > after every request finish. When it's called, scsi_host has just one
> > > free slot. The in-coordination of multiple threads handle of the starved
> > > list will mess the list. I observed dev which doesn't get chance to
> > > dispatch request is put to starved list tail, while dev which dispatches
> > > request is still at head. eg, this will cause unfairness. This means
> > > some devices will get starved.
> > James,
> > can you take a look at it? the two patches give 12% improvement in a
> > simple file creation workload. I updated the patch log.
> 
> The exit criteria still don't look right to me.
> 
> given two devices on the starved list, s1 and s2.  First loop around
> head_sdev is s1, we run the queue and it gets put back.  Now we loop to
> s2.  as we drop the lock to run the queue, a competing thread gets in,
> runs the starved loop and s1 gets removed.  We run s2, it gets put back,
> but we don't see s1 is gone (since the sdev we're processing is s2 and
> the check head_sdev == sdev fails) and it remains our head_sdev.  The
> result is a loop until s2 gets removed, which would be highly
> undesirable.
> 
> The problem essentially is that you're using local variables to track
> what you've made a global list, so any other list manipulation external
> to your local variables is invisible to you.  You can fix this by
> checking list_empty(&head_sdev->starved_list) instead, but this will
> still lead to running the starved list up to twice as much as we would
> have previously.
A put-back mostly means the host is busy, which also helps the loop
terminate; but OK, there is a rare case where the list can be run twice.
I've given up on fixing this issue.

The issue the second patch tries to fix is still valid, and the fix can be
simplified so it no longer depends on the first patch. Interestingly, I get
similar throughput with just the simplified patch (below), which means most
of the starved-list mess comes from the unplug path. I guess this one is
acceptable?

Thanks,
Shaohua

Subject: SCSI: don't change sdev starvation list order without request dispatched

An sdev is deleted from the starved list and then we try to dispatch from
that device. It's quite possible the sdev can't actually dispatch a request,
in which case it ends up at the starved list tail. This isn't fair.
There are two cases here:
1. The unplug path. scsi_request_fn() calls scsi_target_queue_ready(), which
removes the dev from the starved list; but quite possibly the host queue
isn't ready, so the dev is moved back to the starved list without having
dispatched any request.
2. The scsi_run_queue() path. It deletes the dev from the starved list first
(both the global and the local starved list), then handles the dev, after
which the same sequence as case 1 can happen.

This patch fixes the first case. Case 2 isn't fixed, because there is a
rare case where scsi_run_queue() finds the host isn't busy but
scsi_request_fn() finds the host is busy (another CPU was faster to claim
the host queue depth). Not deleting the dev from the starved list in
scsi_run_queue() would then keep scsi_run_queue() looping (though this is a
very rare case, because the host will become busy). Fortunately, fixing
case 1 already gives a big improvement for starvation in my test. In a
12-disk JBOD setup, running file creation under EXT4, this gives 12% more
throughput.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
 drivers/scsi/scsi_lib.c |    7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

Index: linux/drivers/scsi/scsi_lib.c
===================================================================
--- linux.orig/drivers/scsi/scsi_lib.c	2012-01-10 10:35:36.000000000 +0800
+++ linux/drivers/scsi/scsi_lib.c	2012-01-10 10:39:08.000000000 +0800
@@ -1316,15 +1316,10 @@ static inline int scsi_target_queue_read
 	}
 
 	if (scsi_target_is_busy(starget)) {
-		if (list_empty(&sdev->starved_entry))
-			list_add_tail(&sdev->starved_entry,
-				      &shost->starved_list);
+		list_move_tail(&sdev->starved_entry, &shost->starved_list);
 		return 0;
 	}
 
-	/* We're OK to process the command, so we can't be starved */
-	if (!list_empty(&sdev->starved_entry))
-		list_del_init(&sdev->starved_entry);
 	return 1;
 }
 

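One note on the list_move_tail() hunk in the scsi_target_queue_ready()
change above: dropping the list_empty() guard looks safe because, as far as
I can tell, starved_entry is always left pointing at itself while the sdev
is off the list (INIT_LIST_HEAD() at allocation, list_del_init() on
removal), and moving a self-pointing entry degenerates into a plain tail
add. A tiny user-space check of that property, using stand-in helpers that
only mimic <linux/list.h>:

#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_move_tail(struct list_head *n, struct list_head *h)
{
	/* unlink (a no-op for a self-pointing node), then add at tail */
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_add_tail(n, h);
}

int main(void)
{
	struct list_head starved, a, b;

	INIT_LIST_HEAD(&starved);
	INIT_LIST_HEAD(&a);	/* detached, self-pointing: "not starved" */
	INIT_LIST_HEAD(&b);

	list_move_tail(&a, &starved);	/* behaves like list_add_tail */
	list_move_tail(&b, &starved);	/* list is now a, b */
	list_move_tail(&a, &starved);	/* already listed: moved to tail */

	printf("head is %s, tail is %s\n",
	       starved.next == &b ? "b" : "a",
	       starved.prev == &a ? "a" : "b");
	return 0;
}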


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2012-01-10  3:12 UTC | newest]

Thread overview: 8+ messages
2011-12-22  3:10 [patch 1/2]scsi: scsi_run_queue() doesn't use local list to handle starved sdev Shaohua Li
2011-12-22 18:27 ` James Bottomley
2011-12-23  0:40   ` Shaohua Li
2011-12-23  1:17     ` James Bottomley
2011-12-23  1:53       ` Shaohua Li
2012-01-09  7:31         ` Shaohua Li
2012-01-09 17:30           ` James Bottomley
2012-01-10  3:27             ` Shaohua Li
