All of lore.kernel.org
* [PATCH 0/2] vhost_worker fixes
@ 2013-08-14  7:01 Bart Van Assche
  2013-08-14  7:02 ` [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency Bart Van Assche
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Bart Van Assche @ 2013-08-14  7:01 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Asias He, kvm-devel

There are two patches in this series:
* A patch that reduces vhost_work_flush() latency on busy systems.
* A patch that avoids that vhost_work_flush() returns early.

I found these issues via code inspection.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-14  7:01 [PATCH 0/2] vhost_worker fixes Bart Van Assche
@ 2013-08-14  7:02 ` Bart Van Assche
  2013-08-14 11:37   ` Michael S. Tsirkin
  2013-08-14  7:03 ` [PATCH 2/2] vhost: Avoid that vhost_work_flush() returns early Bart Van Assche
  2013-08-14 11:39 ` [PATCH 0/2] vhost_worker fixes Michael S. Tsirkin
  2 siblings, 1 reply; 12+ messages in thread
From: Bart Van Assche @ 2013-08-14  7:02 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Asias He, kvm-devel

If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
waiters before rescheduling instead of after rescheduling.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Asias He <asias@redhat.com>
---
 drivers/vhost/vhost.c |   42 +++++++++++++++++++-----------------------
 1 file changed, 19 insertions(+), 23 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index e58cf00..e7ffc10 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -201,47 +201,43 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 static int vhost_worker(void *data)
 {
 	struct vhost_dev *dev = data;
-	struct vhost_work *work = NULL;
-	unsigned uninitialized_var(seq);
+	struct vhost_work *work;
+	unsigned seq;
 	mm_segment_t oldfs = get_fs();
 
 	set_fs(USER_DS);
 	use_mm(dev->mm);
 
-	for (;;) {
+	spin_lock_irq(&dev->work_lock);
+	while (!kthread_should_stop()) {
 		/* mb paired w/ kthread_stop */
 		set_current_state(TASK_INTERRUPTIBLE);
-
-		spin_lock_irq(&dev->work_lock);
-		if (work) {
-			work->done_seq = seq;
-			if (work->flushing)
-				wake_up_all(&work->done);
-		}
-
-		if (kthread_should_stop()) {
-			spin_unlock_irq(&dev->work_lock);
-			__set_current_state(TASK_RUNNING);
-			break;
-		}
 		if (!list_empty(&dev->work_list)) {
 			work = list_first_entry(&dev->work_list,
 						struct vhost_work, node);
 			list_del_init(&work->node);
 			seq = work->queue_seq;
-		} else
-			work = NULL;
-		spin_unlock_irq(&dev->work_lock);
+			spin_unlock_irq(&dev->work_lock);
 
-		if (work) {
 			__set_current_state(TASK_RUNNING);
 			work->fn(work);
-			if (need_resched())
-				schedule();
-		} else
+
+			spin_lock_irq(&dev->work_lock);
+			work->done_seq = seq;
+			if (work->flushing)
+				wake_up_all(&work->done);
+		}
+		if (list_empty(&dev->work_list) || need_resched()) {
+			spin_unlock_irq(&dev->work_lock);
+
 			schedule();
 
+			spin_lock_irq(&dev->work_lock);
+		}
 	}
+	spin_unlock_irq(&dev->work_lock);
+
+	__set_current_state(TASK_RUNNING);
 	unuse_mm(dev->mm);
 	set_fs(oldfs);
 	return 0;
-- 
1.7.10.4



* [PATCH 2/2] vhost: Avoid that vhost_work_flush() returns early
  2013-08-14  7:01 [PATCH 0/2] vhost_worker fixes Bart Van Assche
  2013-08-14  7:02 ` [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency Bart Van Assche
@ 2013-08-14  7:03 ` Bart Van Assche
  2013-08-14 11:46   ` Michael S. Tsirkin
  2013-08-14 11:39 ` [PATCH 0/2] vhost_worker fixes Michael S. Tsirkin
  2 siblings, 1 reply; 12+ messages in thread
From: Bart Van Assche @ 2013-08-14  7:03 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Asias He, kvm-devel

If two or more items are queued on dev->work_list before
vhost_worker() starts processing these then the value of
work->done_seq will be set to the sequence number of a work item
that has not yet been processed. Avoid this by letting
vhost_worker() count the number of items that have already been
processed.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Asias He <asias@redhat.com>
---
 drivers/vhost/vhost.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index e7ffc10..11d668a 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -202,7 +202,7 @@ static int vhost_worker(void *data)
 {
 	struct vhost_dev *dev = data;
 	struct vhost_work *work;
-	unsigned seq;
+	unsigned seq = 0;
 	mm_segment_t oldfs = get_fs();
 
 	set_fs(USER_DS);
@@ -216,14 +216,13 @@ static int vhost_worker(void *data)
 			work = list_first_entry(&dev->work_list,
 						struct vhost_work, node);
 			list_del_init(&work->node);
-			seq = work->queue_seq;
 			spin_unlock_irq(&dev->work_lock);
 
 			__set_current_state(TASK_RUNNING);
 			work->fn(work);
 
 			spin_lock_irq(&dev->work_lock);
-			work->done_seq = seq;
+			work->done_seq = ++seq;
 			if (work->flushing)
 				wake_up_all(&work->done);
 		}
-- 
1.7.10.4



* Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-14  7:02 ` [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency Bart Van Assche
@ 2013-08-14 11:37   ` Michael S. Tsirkin
  2013-08-14 15:22     ` Bart Van Assche
  0 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2013-08-14 11:37 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Asias He, kvm-devel

On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
> If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
> waiters before rescheduling instead of after rescheduling.
> 
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Asias He <asias@redhat.com>

Why exactly? It's not like flush needs to be extra fast ...

> ---
>  drivers/vhost/vhost.c |   42 +++++++++++++++++++-----------------------
>  1 file changed, 19 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index e58cf00..e7ffc10 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -201,47 +201,43 @@ static void vhost_vq_reset(struct vhost_dev *dev,
>  static int vhost_worker(void *data)
>  {
>  	struct vhost_dev *dev = data;
> -	struct vhost_work *work = NULL;
> -	unsigned uninitialized_var(seq);
> +	struct vhost_work *work;
> +	unsigned seq;
>  	mm_segment_t oldfs = get_fs();
>  
>  	set_fs(USER_DS);
>  	use_mm(dev->mm);
>  
> -	for (;;) {
> +	spin_lock_irq(&dev->work_lock);
> +	while (!kthread_should_stop()) {
>  		/* mb paired w/ kthread_stop */
>  		set_current_state(TASK_INTERRUPTIBLE);
> -
> -		spin_lock_irq(&dev->work_lock);
> -		if (work) {
> -			work->done_seq = seq;
> -			if (work->flushing)
> -				wake_up_all(&work->done);
> -		}
> -
> -		if (kthread_should_stop()) {
> -			spin_unlock_irq(&dev->work_lock);
> -			__set_current_state(TASK_RUNNING);
> -			break;
> -		}
>  		if (!list_empty(&dev->work_list)) {
>  			work = list_first_entry(&dev->work_list,
>  						struct vhost_work, node);
>  			list_del_init(&work->node);
>  			seq = work->queue_seq;
> -		} else
> -			work = NULL;
> -		spin_unlock_irq(&dev->work_lock);
> +			spin_unlock_irq(&dev->work_lock);
>  
> -		if (work) {
>  			__set_current_state(TASK_RUNNING);
>  			work->fn(work);
> -			if (need_resched())
> -				schedule();
> -		} else
> +
> +			spin_lock_irq(&dev->work_lock);
> +			work->done_seq = seq;
> +			if (work->flushing)
> +				wake_up_all(&work->done);
> +		}
> +		if (list_empty(&dev->work_list) || need_resched()) {
> +			spin_unlock_irq(&dev->work_lock);
> +
>  			schedule();
>  
> +			spin_lock_irq(&dev->work_lock);
> +		}
>  	}
> +	spin_unlock_irq(&dev->work_lock);
> +
> +	__set_current_state(TASK_RUNNING);
>  	unuse_mm(dev->mm);
>  	set_fs(oldfs);
>  	return 0;
> -- 
> 1.7.10.4


* Re: [PATCH 0/2] vhost_worker fixes
  2013-08-14  7:01 [PATCH 0/2] vhost_worker fixes Bart Van Assche
  2013-08-14  7:02 ` [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency Bart Van Assche
  2013-08-14  7:03 ` [PATCH 2/2] vhost: Avoid that vhost_work_flush() returns early Bart Van Assche
@ 2013-08-14 11:39 ` Michael S. Tsirkin
  2 siblings, 0 replies; 12+ messages in thread
From: Michael S. Tsirkin @ 2013-08-14 11:39 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Asias He, kvm-devel

On Wed, Aug 14, 2013 at 09:01:27AM +0200, Bart Van Assche wrote:
> There are two patches in this series:
> * A patch that reduces vhost_work_flush() latency on busy systems.
> * A patch that avoids that vhost_work_flush() returns early.
> 
> I found these issues via code inspection.

Interesting. What makes you think vhost_work_flush
latency matters?

Please copy all relevant mailing lists in the future:
you can find them in the MAINTAINERS file.



* Re: [PATCH 2/2] vhost: Avoid that vhost_work_flush() returns early
  2013-08-14  7:03 ` [PATCH 2/2] vhost: Avoid that vhost_work_flush() returns early Bart Van Assche
@ 2013-08-14 11:46   ` Michael S. Tsirkin
  2013-08-14 15:19     ` Bart Van Assche
  0 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2013-08-14 11:46 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Asias He, kvm-devel

On Wed, Aug 14, 2013 at 09:03:28AM +0200, Bart Van Assche wrote:
> If two or more items are queued on dev->work_list before
> vhost_worker() starts processing these then the value of
> work->done_seq will be set to the sequence number of a work item
> that has not yet been processed. Avoid this by letting
> vhost_worker() count the number of items that have already been
> processed.
> 
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Asias He <asias@redhat.com>

I'm confused by this explanation.
done_seq is set here:
                if (work) {
                        work->done_seq = seq;
                        if (work->flushing)
                                wake_up_all(&work->done);
                }

and work is set here:

                if (!list_empty(&dev->work_list)) {
                        work = list_first_entry(&dev->work_list,
                                                struct vhost_work, node);
                        list_del_init(&work->node);
                        seq = work->queue_seq;
                }

this work is processed on the next line:

                if (work) {
                        __set_current_state(TASK_RUNNING);
                        work->fn(work);
                        if (need_resched())
                                schedule();
                }

so how do we end up with a sequence of a work item
that isn't processed?


> ---
>  drivers/vhost/vhost.c |    5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index e7ffc10..11d668a 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -202,7 +202,7 @@ static int vhost_worker(void *data)
>  {
>  	struct vhost_dev *dev = data;
>  	struct vhost_work *work;
> -	unsigned seq;
> +	unsigned seq = 0;
>  	mm_segment_t oldfs = get_fs();
>  
>  	set_fs(USER_DS);
> @@ -216,14 +216,13 @@ static int vhost_worker(void *data)
>  			work = list_first_entry(&dev->work_list,
>  						struct vhost_work, node);
>  			list_del_init(&work->node);
> -			seq = work->queue_seq;
>  			spin_unlock_irq(&dev->work_lock);
>  
>  			__set_current_state(TASK_RUNNING);
>  			work->fn(work);
>  
>  			spin_lock_irq(&dev->work_lock);
> -			work->done_seq = seq;
> +			work->done_seq = ++seq;
>  			if (work->flushing)
>  				wake_up_all(&work->done);
>  		}
> -- 
> 1.7.10.4


* Re: [PATCH 2/2] vhost: Avoid that vhost_work_flush() returns early
  2013-08-14 11:46   ` Michael S. Tsirkin
@ 2013-08-14 15:19     ` Bart Van Assche
  0 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2013-08-14 15:19 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Asias He, kvm-devel

On 08/14/13 13:46, Michael S. Tsirkin wrote:
> I'm confused by this explanation.
> done_seq is set here:
>                  if (work) {
>                          work->done_seq = seq;
>                          if (work->flushing)
>                                  wake_up_all(&work->done);
>                  }
>
> and work is set here:
>
>                  if (!list_empty(&dev->work_list)) {
>                          work = list_first_entry(&dev->work_list,
>                                                  struct vhost_work, node);
>                          list_del_init(&work->node);
>                          seq = work->queue_seq;
>                  }
>
> this work is processed on the next line:
>
>                  if (work) {
>                          __set_current_state(TASK_RUNNING);
>                          work->fn(work);
>                          if (need_resched())
>                                  schedule();
>                  }
>
> so how do we end up with a sequence of a work item
> that isn't processed?

I was wondering what would happen if a work item got requeued before 
done_seq is set. I think you are right and this should work fine.

Bart.


* Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-14 11:37   ` Michael S. Tsirkin
@ 2013-08-14 15:22     ` Bart Van Assche
  2013-08-14 17:58       ` Michael S. Tsirkin
  0 siblings, 1 reply; 12+ messages in thread
From: Bart Van Assche @ 2013-08-14 15:22 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Asias He, kvm-devel

On 08/14/13 13:37, Michael S. Tsirkin wrote:
> On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
>> If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
>> waiters before rescheduling instead of after rescheduling.
>>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Cc: Asias He <asias@redhat.com>
>
> Why exactly? It's not like flush needs to be extra fast ...

I'm not worried about how fast a flush is processed either. But I think 
this patch is a nice cleanup of the vhost_worker() function. It 
eliminates the uninitialized_var() construct and moves the assignment of 
"done_seq" below the read of "queue_seq".

Bart.


* Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-14 15:22     ` Bart Van Assche
@ 2013-08-14 17:58       ` Michael S. Tsirkin
  2013-08-15  1:30         ` Asias He
  0 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2013-08-14 17:58 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Asias He, kvm-devel

On Wed, Aug 14, 2013 at 05:22:36PM +0200, Bart Van Assche wrote:
> On 08/14/13 13:37, Michael S. Tsirkin wrote:
> >On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
> >>If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
> >>waiters before rescheduling instead of after rescheduling.
> >>
> >>Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> >>Cc: Michael S. Tsirkin <mst@redhat.com>
> >>Cc: Asias He <asias@redhat.com>
> >
> >Why exactly? It's not like flush needs to be extra fast ...
> 
> I'm not worried about how fast a flush is processed either. But I
> think this patch is a nice cleanup of the vhost_worker() function.
> It eliminates the uninitialized_var() construct and moves the
> assignment of "done_seq" below the read of "queue_seq".
> 
> Bart.

I'm not worried about uninitialized_var - it's just a compiler bug.
done_seq below read is nice, but this is at the cost of sprinkling
lock/unlock, lock/unlock all over the code.  Maybe we can come up with
something that doesn't have this property.

-- 
MST


* Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-14 17:58       ` Michael S. Tsirkin
@ 2013-08-15  1:30         ` Asias He
  2013-08-15  6:13           ` Bart Van Assche
  0 siblings, 1 reply; 12+ messages in thread
From: Asias He @ 2013-08-15  1:30 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Bart Van Assche, kvm-devel

On Wed, Aug 14, 2013 at 08:58:25PM +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 14, 2013 at 05:22:36PM +0200, Bart Van Assche wrote:
> > On 08/14/13 13:37, Michael S. Tsirkin wrote:
> > >On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
> > >>If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
> > >>waiters before rescheduling instead of after rescheduling.
> > >>
> > >>Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> > >>Cc: Michael S. Tsirkin <mst@redhat.com>
> > >>Cc: Asias He <asias@redhat.com>
> > >
> > >Why exactly? It's not like flush needs to be extra fast ...
> > 
> > I'm not worried about how fast a flush is processed either. But I
> > think this patch is a nice cleanup of the vhost_worker() function.
> > It eliminates the uninitialized_var() construct and moves the
> > assignment of "done_seq" below the read of "queue_seq".
> > 
> > Bart.
> 
> I'm not worried about uninitialized_var - it's just a compiler bug.
> done_seq below read is nice, but this is at the cost of sprinkling
> lock/unlock, lock/unlock all over the code.  Maybe we can come up with
> something that doesn't have this property.

The extra locking introduced here does not look good to me either.

> -- 
> MST

-- 
Asias


* Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-15  1:30         ` Asias He
@ 2013-08-15  6:13           ` Bart Van Assche
  2013-08-15  7:35             ` Michael S. Tsirkin
  0 siblings, 1 reply; 12+ messages in thread
From: Bart Van Assche @ 2013-08-15  6:13 UTC (permalink / raw)
  To: Asias He; +Cc: Michael S. Tsirkin, kvm-devel

On 08/15/13 03:30, Asias He wrote:
> On Wed, Aug 14, 2013 at 08:58:25PM +0300, Michael S. Tsirkin wrote:
>> On Wed, Aug 14, 2013 at 05:22:36PM +0200, Bart Van Assche wrote:
>>> On 08/14/13 13:37, Michael S. Tsirkin wrote:
>>>> On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
>>>>> If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
>>>>> waiters before rescheduling instead of after rescheduling.
>>>>>
>>>>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>>>>> Cc: Michael S. Tsirkin <mst@redhat.com>
>>>>> Cc: Asias He <asias@redhat.com>
>>>>
>>>> Why exactly? It's not like flush needs to be extra fast ...
>>>
>>> I'm not worried about how fast a flush is processed either. But I
>>> think this patch is a nice cleanup of the vhost_worker() function.
>>> It eliminates the uninitialized_var() construct and moves the
>>> assignment of "done_seq" below the read of "queue_seq".
>>>
>>> Bart.
>>
>> I'm not worried about uninitialized_var - it's just a compiler bug.
>> done_seq below read is nice, but this is at the cost of sprinkling
>> lock/unlock, lock/unlock all over the code.  Maybe we can come up with
>> something that doesn't have this property.
>
> The extra locking introduced here does not look good to me either.

Please note that although there are additional locking statements in the 
code, there are no additional locking calls at runtime.

Bart.



* Re: [PATCH 1/2] vhost: Reduce vhost_work_flush() wakeup latency
  2013-08-15  6:13           ` Bart Van Assche
@ 2013-08-15  7:35             ` Michael S. Tsirkin
  0 siblings, 0 replies; 12+ messages in thread
From: Michael S. Tsirkin @ 2013-08-15  7:35 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Asias He, kvm-devel

On Thu, Aug 15, 2013 at 08:13:14AM +0200, Bart Van Assche wrote:
> On 08/15/13 03:30, Asias He wrote:
> >On Wed, Aug 14, 2013 at 08:58:25PM +0300, Michael S. Tsirkin wrote:
> >>On Wed, Aug 14, 2013 at 05:22:36PM +0200, Bart Van Assche wrote:
> >>>On 08/14/13 13:37, Michael S. Tsirkin wrote:
> >>>>On Wed, Aug 14, 2013 at 09:02:32AM +0200, Bart Van Assche wrote:
> >>>>>If the TIF_NEED_RESCHED task flag is set, wake up any vhost_work_flush()
> >>>>>waiters before rescheduling instead of after rescheduling.
> >>>>>
> >>>>>Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> >>>>>Cc: Michael S. Tsirkin <mst@redhat.com>
> >>>>>Cc: Asias He <asias@redhat.com>
> >>>>
> >>>>Why exactly? It's not like flush needs to be extra fast ...
> >>>
> >>>I'm not worried about how fast a flush is processed either. But I
> >>>think this patch is a nice cleanup of the vhost_worker() function.
> >>>It eliminates the uninitialized_var() construct and moves the
> >>>assignment of "done_seq" below the read of "queue_seq".
> >>>
> >>>Bart.
> >>
> >>I'm not worried about uninitialized_var - it's just a compiler bug.
> >>done_seq below read is nice, but this is at the cost of sprinkling
> >>lock/unlock, lock/unlock all over the code.  Maybe we can come up with
> >>something that doesn't have this property.
> >
> >The extra locking introduced here does not look good to me either.
> 
> Please note that although there are additional locking statements in
> the code, there are no additional locking calls at runtime.
> 
> Bart.

That's true but this is supposed to be a cleanup.

-- 
MST

