* Degraded I/O Performance since 3.4 - Regression in 3.4?
@ 2012-04-23 10:02 Tobias Geiger
  2012-04-23 11:53 ` Stefano Stabellini
  0 siblings, 1 reply; 15+ messages in thread
From: Tobias Geiger @ 2012-04-23 10:02 UTC (permalink / raw)
  To: xen-devel

Hello!

I noticed a considerable drop in I/O performance when using 3.4 (rc3 and rc4
tested) as the Dom0 kernel.

With 3.3 I get over 100 MB/s in an HVM DomU (win64) with PV drivers
(gplpv_Vista2008x64_0.11.0.357.msi);
with 3.4 it drops to about a third of that.

Xen Version is xen-unstable: 
xen_changeset          : Tue Apr 17 19:13:52 2012 +0100 25209:e6b20ec1824c

Disk config line is:
disk = [ '/dev/vg_ssd/win7system,,hda' ] 
- it uses blkback.

Greetings
Tobias

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-23 10:02 Degraded I/O Performance since 3.4 - Regression in 3.4? Tobias Geiger
@ 2012-04-23 11:53 ` Stefano Stabellini
  2012-04-23 15:24   ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2012-04-23 11:53 UTC (permalink / raw)
  To: Tobias Geiger; +Cc: xen-devel

On Mon, 23 Apr 2012, Tobias Geiger wrote:
> Hello!
> 
> i noticed a considerable drop in I/O Performance when using 3.4 (rc3 and rc4 
> tested) as Dom0 Kernel;
> 
> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers 
> (gplpv_Vista2008x64_0.11.0.357.msi); 
> With 3.4 it drops to about a third of that.
> 
> Xen Version is xen-unstable: 
> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100 25209:e6b20ec1824c
> 
> Disk config line is:
> disk = [ '/dev/vg_ssd/win7system,,hda' ] 
> - it uses blkback.

I fail to see what could be the cause of the issue: nothing on the
blkback side should affect performance significantly.
You could try reverting the four patches to blkback that were applied
between 3.3 and 3.4-rc3 just to make sure it is not a blkback
regression:

$ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
Daniel De Graaf (2):
      xen/blkback: use grant-table.c hypercall wrappers
      xen/blkback: Enable blkback on HVM guests

Konrad Rzeszutek Wilk (2):
      xen/blkback: Squash the discard support for 'file' and 'phy' type.
      xen/blkback: Make optional features be really optional.
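
If it helps, the reverts themselves can be scripted: revert, newest first, every
commit in the range that touched the backend directory. Sketched below on a
throwaway repository so the commands run as-is; the repository, commits, and
branch name are stand-ins, and against the kernel tree the range would be
v3.3..v3.4-rc3:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
# Build a miniature history standing in for v3.3..v3.4-rc3.
mkdir -p drivers/block/xen-blkback
echo base > drivers/block/xen-blkback/blkback.c
git add -A && git commit -qm base
git tag v3.3
for s in "hypercall wrappers" "enable on HVM"; do
    echo "$s" >> drivers/block/xen-blkback/blkback.c
    git commit -qam "xen/blkback: $s"
done
git tag v3.4-rc3
# Revert on a test branch; git log lists newest first, which is the
# safe order to revert in.
git checkout -qb blkback-revert-test v3.4-rc3
git log --format=%H v3.3..v3.4-rc3 -- drivers/block/xen-blkback |
    xargs git revert --no-edit >/dev/null
git diff --quiet v3.3 -- drivers/block/xen-blkback && echo "back at v3.3 state"
```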


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-23 11:53 ` Stefano Stabellini
@ 2012-04-23 15:24   ` Konrad Rzeszutek Wilk
  2012-04-23 20:53     ` Tobias Geiger
  0 siblings, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-04-23 15:24 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Tobias Geiger, xen-devel

On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
> On Mon, 23 Apr 2012, Tobias Geiger wrote:
> > Hello!
> > 
> > i noticed a considerable drop in I/O Performance when using 3.4 (rc3 and rc4 
> > tested) as Dom0 Kernel;
> > 
> > With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers 
> > (gplpv_Vista2008x64_0.11.0.357.msi); 
> > With 3.4 it drops to about a third of that.
> > 
> > Xen Version is xen-unstable: 
> > xen_changeset          : Tue Apr 17 19:13:52 2012 +0100 25209:e6b20ec1824c
> > 
> > Disk config line is:
> > disk = [ '/dev/vg_ssd/win7system,,hda' ] 
> > - it uses blkback.
> 
> I fail to see what could be the cause of the issue: nothing on the
> blkback side should affect performances significantly.
> You could try reverting the four patches to blkback that were applied
> between 3.3 and 3.4-rc3 just to make sure it is not a blkback
> regression:
> 
> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
> Daniel De Graaf (2):
>       xen/blkback: use grant-table.c hypercall wrappers

Hm.. perhaps this patch fixes a possible perf loss introduced by the
mentioned patch (I would have thought the compiler would keep the result of
the first call to vaddr(req, i) somewhere.. but I'm not sure):

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 73f196c..65dbadc 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req *req)
 	int ret;
 
 	for (i = 0; i < req->nr_pages; i++) {
+		unsigned long addr;
 		handle = pending_handle(req, i);
 		if (handle == BLKBACK_INVALID_HANDLE)
 			continue;
-		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
+		addr = vaddr(req, i);
+		gnttab_set_unmap_op(&unmap[invcount], addr,
 				    GNTMAP_host_map, handle);
 		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
-		pages[invcount] = virt_to_page(vaddr(req, i));
+		pages[invcount] = virt_to_page(addr);
 		invcount++;
 	}
 
>       xen/blkback: Enable blkback on HVM guests
> 
> Konrad Rzeszutek Wilk (2):
>       xen/blkback: Squash the discard support for 'file' and 'phy' type.
>       xen/blkback: Make optional features be really optional.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-23 15:24   ` Konrad Rzeszutek Wilk
@ 2012-04-23 20:53     ` Tobias Geiger
  2012-04-24  7:27       ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Tobias Geiger @ 2012-04-23 20:53 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Stefano Stabellini

Am 23.04.2012 17:24, schrieb Konrad Rzeszutek Wilk:
> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
>>> Hello!
>>>
>>> i noticed a considerable drop in I/O Performance when using 3.4 (rc3 and rc4
>>> tested) as Dom0 Kernel;
>>>
>>> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers
>>> (gplpv_Vista2008x64_0.11.0.357.msi);
>>> With 3.4 it drops to about a third of that.
>>>
>>> Xen Version is xen-unstable:
>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100 25209:e6b20ec1824c
>>>
>>> Disk config line is:
>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
>>> - it uses blkback.
>> I fail to see what could be the cause of the issue: nothing on the
>> blkback side should affect performances significantly.
>> You could try reverting the four patches to blkback that were applied
>> between 3.3 and 3.4-rc3 just to make sure it is not a blkback
>> regression:
>>
>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
>> Daniel De Graaf (2):
>>        xen/blkback: use grant-table.c hypercall wrappers
> Hm.. Perhaps this patch fixes it a possible perf (I would think that
> the compiler would have kept the result of the first call to vaddr(req, i)
> somewhere.. but not sure) lost with the mentioned patch:
>
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 73f196c..65dbadc 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req *req)
>   	int ret;
>
>   	for (i = 0; i<  req->nr_pages; i++) {
> +		unsigned long addr;
>   		handle = pending_handle(req, i);
>   		if (handle == BLKBACK_INVALID_HANDLE)
>   			continue;
> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
> +		addr = vaddr(req, i);
> +		gnttab_set_unmap_op(&unmap[invcount], addr,
>   				    GNTMAP_host_map, handle);
>   		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
> -		pages[invcount] = virt_to_page(vaddr(req, i));
> +		pages[invcount] = virt_to_page(addr);
>   		invcount++;
>   	}
>
>>        xen/blkback: Enable blkback on HVM guests
>>
>> Konrad Rzeszutek Wilk (2):
>>        xen/blkback: Squash the discard support for 'file' and 'phy' type.
>>        xen/blkback: Make optional features be really optional.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
that made it even worse :)
Write Performance is down to about 7mb/s (with 3.3: ~130mb/s)
Read "only" down to 40mb/s (with 3.3: ~140mb/s)

Greetings
Tobias


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-23 20:53     ` Tobias Geiger
@ 2012-04-24  7:27       ` Jan Beulich
  2012-04-24 12:09         ` Tobias Geiger
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2012-04-24  7:27 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Tobias Geiger; +Cc: xen-devel, Stefano Stabellini

>>> On 23.04.12 at 22:53, Tobias Geiger <tobias.geiger@vido.info> wrote:
> Am 23.04.2012 17:24, schrieb Konrad Rzeszutek Wilk:
>> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
>>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
>>>> Hello!
>>>>
>>>> i noticed a considerable drop in I/O Performance when using 3.4 (rc3 and rc4
>>>> tested) as Dom0 Kernel;
>>>>
>>>> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers
>>>> (gplpv_Vista2008x64_0.11.0.357.msi);
>>>> With 3.4 it drops to about a third of that.
>>>>
>>>> Xen Version is xen-unstable:
>>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100 25209:e6b20ec1824c
>>>>
>>>> Disk config line is:
>>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
>>>> - it uses blkback.
>>> I fail to see what could be the cause of the issue: nothing on the
>>> blkback side should affect performances significantly.
>>> You could try reverting the four patches to blkback that were applied
>>> between 3.3 and 3.4-rc3 just to make sure it is not a blkback
>>> regression:
>>>
>>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
>>> Daniel De Graaf (2):
>>>        xen/blkback: use grant-table.c hypercall wrappers
>> Hm.. Perhaps this patch fixes it a possible perf (I would think that
>> the compiler would have kept the result of the first call to vaddr(req, i)
>> somewhere.. but not sure) lost with the mentioned patch:
>>
>> diff --git a/drivers/block/xen-blkback/blkback.c 
> b/drivers/block/xen-blkback/blkback.c
>> index 73f196c..65dbadc 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req *req)
>>   	int ret;
>>
>>   	for (i = 0; i<  req->nr_pages; i++) {
>> +		unsigned long addr;
>>   		handle = pending_handle(req, i);
>>   		if (handle == BLKBACK_INVALID_HANDLE)
>>   			continue;
>> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
>> +		addr = vaddr(req, i);
>> +		gnttab_set_unmap_op(&unmap[invcount], addr,
>>   				    GNTMAP_host_map, handle);
>>   		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
>> -		pages[invcount] = virt_to_page(vaddr(req, i));
>> +		pages[invcount] = virt_to_page(addr);
>>   		invcount++;
>>   	}
>>
>>>        xen/blkback: Enable blkback on HVM guests
>>>
>>> Konrad Rzeszutek Wilk (2):
>>>        xen/blkback: Squash the discard support for 'file' and 'phy' type.
>>>        xen/blkback: Make optional features be really optional.
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org 
>>> http://lists.xen.org/xen-devel 
> that made it even worse :)
> Write Performance is down to about 7mb/s (with 3.3: ~130mb/s)
> Read "only" down to 40mb/s (with 3.3: ~140mb/s)

I doubt this patch can have any meaningful positive or negative
performance effect at all - are you sure you're doing comparable
runs? After all, this is just a few arithmetic operations and an
array access, which I'd expect to hide in the noise.

Jan


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24  7:27       ` Jan Beulich
@ 2012-04-24 12:09         ` Tobias Geiger
  2012-04-24 12:52           ` Stefano Stabellini
  0 siblings, 1 reply; 15+ messages in thread
From: Tobias Geiger @ 2012-04-24 12:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Stefano Stabellini, xen-devel, Konrad Rzeszutek Wilk

Am Dienstag, 24. April 2012, 09:27:42 schrieb Jan Beulich:
> >>> On 23.04.12 at 22:53, Tobias Geiger <tobias.geiger@vido.info> wrote:
> > Am 23.04.2012 17:24, schrieb Konrad Rzeszutek Wilk:
> >> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
> >>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
> >>>> Hello!
> >>>> 
> >>>> i noticed a considerable drop in I/O Performance when using 3.4 (rc3
> >>>> and rc4 tested) as Dom0 Kernel;
> >>>> 
> >>>> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers
> >>>> (gplpv_Vista2008x64_0.11.0.357.msi);
> >>>> With 3.4 it drops to about a third of that.
> >>>> 
> >>>> Xen Version is xen-unstable:
> >>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100
> >>>> 25209:e6b20ec1824c
> >>>> 
> >>>> Disk config line is:
> >>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
> >>>> - it uses blkback.
> >>> 
> >>> I fail to see what could be the cause of the issue: nothing on the
> >>> blkback side should affect performances significantly.
> >>> You could try reverting the four patches to blkback that were applied
> >>> between 3.3 and 3.4-rc3 just to make sure it is not a blkback
> >>> regression:
> >>> 
> >>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
> >>> 
> >>> Daniel De Graaf (2):
> >>>        xen/blkback: use grant-table.c hypercall wrappers
> >> 
> >> Hm.. Perhaps this patch fixes it a possible perf (I would think that
> >> the compiler would have kept the result of the first call to vaddr(req,
> >> i) somewhere.. but not sure) lost with the mentioned patch:
> >> 
> >> diff --git a/drivers/block/xen-blkback/blkback.c
> > 
> > b/drivers/block/xen-blkback/blkback.c
> > 
> >> index 73f196c..65dbadc 100644
> >> --- a/drivers/block/xen-blkback/blkback.c
> >> +++ b/drivers/block/xen-blkback/blkback.c
> >> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req
> >> *req)
> >> 
> >>   	int ret;
> >>   	
> >>   	for (i = 0; i<  req->nr_pages; i++) {
> >> 
> >> +		unsigned long addr;
> >> 
> >>   		handle = pending_handle(req, i);
> >>   		if (handle == BLKBACK_INVALID_HANDLE)
> >>   		
> >>   			continue;
> >> 
> >> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
> >> +		addr = vaddr(req, i);
> >> +		gnttab_set_unmap_op(&unmap[invcount], addr,
> >> 
> >>   				    GNTMAP_host_map, handle);
> >>   		
> >>   		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
> >> 
> >> -		pages[invcount] = virt_to_page(vaddr(req, i));
> >> +		pages[invcount] = virt_to_page(addr);
> >> 
> >>   		invcount++;
> >>   	
> >>   	}
> >>   	
> >>>        xen/blkback: Enable blkback on HVM guests
> >>> 
> >>> Konrad Rzeszutek Wilk (2):
> >>>        xen/blkback: Squash the discard support for 'file' and 'phy'
> >>>        type. xen/blkback: Make optional features be really optional.
> >>> 
> >>> _______________________________________________
> >>> Xen-devel mailing list
> >>> Xen-devel@lists.xen.org
> >>> http://lists.xen.org/xen-devel
> > 
> > that made it even worse :)
> > Write Performance is down to about 7mb/s (with 3.3: ~130mb/s)
> > Read "only" down to 40mb/s (with 3.3: ~140mb/s)
> 
> I doubt this patch can have any meaningful positive or negative
> performance effect at all - are you sure you're doing comparable
> runs? After all this is all just about a few arithmetic operations
> and an array access, which I'd expect to hide in the noise.
> 
> Jan

I redid the test:

a) with the 3.3.0 kernel
b) with 3.4.0-rc4
c) with 3.4.0-rc4 and the above patch

everything else remained the same, i.e. the test program and test scenario were
not changed, and the test was started after about 5 min of DomU uptime (so that
no strange boot-time effects become relevant); same phy backend (LVM on SSD),
same everything else. So I can't see what else besides the Dom0 kernel could be
causing this issue. Here are the numbers:

a) read: 135mb/s write: 142mb/s
b) read: 39mb/s  write: 39mb/s
c) read: 40mb/s  write: 40mb/s

The only thing that may be relevant is the difference in kernel config between
3.3 and 3.4 - here's the diff:
http://pastebin.com/raw.php?i=Dy71Fegq

Jan, it seems you're right: the patch doesn't add an extra performance
regression - I guess I had an I/O-intensive task running in Dom0 while doing
yesterday's benchmark, which is why the write performance looked so bad. Sorry
for that.

Still, there's a significant performance penalty from 3.3 to 3.4.

Greetings
Tobias


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24 12:09         ` Tobias Geiger
@ 2012-04-24 12:52           ` Stefano Stabellini
  2012-04-24 14:07             ` Tobias Geiger
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2012-04-24 12:52 UTC (permalink / raw)
  To: Tobias Geiger
  Cc: Stefano Stabellini, xen-devel, Jan Beulich, Konrad Rzeszutek Wilk

On Tue, 24 Apr 2012, Tobias Geiger wrote:
> Am Dienstag, 24. April 2012, 09:27:42 schrieb Jan Beulich:
> > >>> On 23.04.12 at 22:53, Tobias Geiger <tobias.geiger@vido.info> wrote:
> > > Am 23.04.2012 17:24, schrieb Konrad Rzeszutek Wilk:
> > >> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
> > >>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
> > >>>> Hello!
> > >>>> 
> > >>>> i noticed a considerable drop in I/O Performance when using 3.4 (rc3
> > >>>> and rc4 tested) as Dom0 Kernel;
> > >>>> 
> > >>>> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers
> > >>>> (gplpv_Vista2008x64_0.11.0.357.msi);
> > >>>> With 3.4 it drops to about a third of that.
> > >>>> 
> > >>>> Xen Version is xen-unstable:
> > >>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100
> > >>>> 25209:e6b20ec1824c
> > >>>> 
> > >>>> Disk config line is:
> > >>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
> > >>>> - it uses blkback.
> > >>> 
> > >>> I fail to see what could be the cause of the issue: nothing on the
> > >>> blkback side should affect performances significantly.
> > >>> You could try reverting the four patches to blkback that were applied
> > >>> between 3.3 and 3.4-rc3 just to make sure it is not a blkback
> > >>> regression:
> > >>> 
> > >>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
> > >>> 
> > >>> Daniel De Graaf (2):
> > >>>        xen/blkback: use grant-table.c hypercall wrappers
> > >> 
> > >> Hm.. Perhaps this patch fixes it a possible perf (I would think that
> > >> the compiler would have kept the result of the first call to vaddr(req,
> > >> i) somewhere.. but not sure) lost with the mentioned patch:
> > >> 
> > >> diff --git a/drivers/block/xen-blkback/blkback.c
> > > 
> > > b/drivers/block/xen-blkback/blkback.c
> > > 
> > >> index 73f196c..65dbadc 100644
> > >> --- a/drivers/block/xen-blkback/blkback.c
> > >> +++ b/drivers/block/xen-blkback/blkback.c
> > >> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req
> > >> *req)
> > >> 
> > >>   	int ret;
> > >>   	
> > >>   	for (i = 0; i<  req->nr_pages; i++) {
> > >> 
> > >> +		unsigned long addr;
> > >> 
> > >>   		handle = pending_handle(req, i);
> > >>   		if (handle == BLKBACK_INVALID_HANDLE)
> > >>   		
> > >>   			continue;
> > >> 
> > >> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
> > >> +		addr = vaddr(req, i);
> > >> +		gnttab_set_unmap_op(&unmap[invcount], addr,
> > >> 
> > >>   				    GNTMAP_host_map, handle);
> > >>   		
> > >>   		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
> > >> 
> > >> -		pages[invcount] = virt_to_page(vaddr(req, i));
> > >> +		pages[invcount] = virt_to_page(addr);
> > >> 
> > >>   		invcount++;
> > >>   	
> > >>   	}
> > >>   	
> > >>>        xen/blkback: Enable blkback on HVM guests
> > >>> 
> > >>> Konrad Rzeszutek Wilk (2):
> > >>>        xen/blkback: Squash the discard support for 'file' and 'phy'
> > >>>        type. xen/blkback: Make optional features be really optional.
> > >>> 
> > >>> _______________________________________________
> > >>> Xen-devel mailing list
> > >>> Xen-devel@lists.xen.org
> > >>> http://lists.xen.org/xen-devel
> > > 
> > > that made it even worse :)
> > > Write Performance is down to about 7mb/s (with 3.3: ~130mb/s)
> > > Read "only" down to 40mb/s (with 3.3: ~140mb/s)
> > 
> > I doubt this patch can have any meaningful positive or negative
> > performance effect at all - are you sure you're doing comparable
> > runs? After all this is all just about a few arithmetic operations
> > and an array access, which I'd expect to hide in the noise.
> > 
> > Jan
> 
> I redid the test; 
> 
> a) with 3.3.0 kernel 
> b) with 3.4.0-rc4
> c) with 3.40-rc4 and above patch
> 
> everything else remained the same, i.e. test-program and test-scenario was not 
> changed and started after about 5min of domu bootup (so that no strange 
> bootup-effects become relevant); same phy-backend (lvm on ssd), same everything 
> else; so i cant see what else except the used dom0 kernel is causing this 
> issue; but here are the numbers:
> 
> a) read: 135mb/s write: 142mb/s
> b) read: 39mb/s  write: 39mb/s
> c) read: 40mb/s  write: 40mb/s
> 
> Only thing that may become relevant is the difference in kernel-config betwen 
> 3.3 and 3.4 - here's the diff :
> http://pastebin.com/raw.php?i=Dy71Fegq
> 
> Jan, it seems you're right: The patch doesn't add extra performance regression 
> - i guess i had an i/o intensive task running in dom0 while doing the 
> benchmark yesterday, so that the write performance got so bad. sorry for that.
> 
> Still there's a significant performance penalty from 3.3 to 3.4 

Could you please try to revert the following commits?

git revert -n a71e23d9925517e609dfcb72b5874f33cdb0d2ad
git revert -n 3389bb8bf76180eecaffdfa7dd5b35fa4a2ce9b5
git revert -n 4dae76705fc8f9854bb732f9944e7ff9ba7a8e9f
git revert -n b2167ba6dd89d55ced26a867fad8f0fe388fd595
git revert -n 4f14faaab4ee46a046b6baff85644be199de718c
git revert -n 9846ff10af12f9e7caac696737db6c990592a74a


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24 12:52           ` Stefano Stabellini
@ 2012-04-24 14:07             ` Tobias Geiger
  2012-04-24 14:16               ` Stefano Stabellini
  2012-04-24 16:30               ` Konrad Rzeszutek Wilk
  0 siblings, 2 replies; 15+ messages in thread
From: Tobias Geiger @ 2012-04-24 14:07 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, Jan Beulich, Konrad Rzeszutek Wilk

Am Dienstag, 24. April 2012, 14:52:31 schrieb Stefano Stabellini:
> On Tue, 24 Apr 2012, Tobias Geiger wrote:
> > Am Dienstag, 24. April 2012, 09:27:42 schrieb Jan Beulich:
> > > >>> On 23.04.12 at 22:53, Tobias Geiger <tobias.geiger@vido.info> wrote:
> > > > Am 23.04.2012 17:24, schrieb Konrad Rzeszutek Wilk:
> > > >> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
> > > >>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
> > > >>>> Hello!
> > > >>>> 
> > > >>>> i noticed a considerable drop in I/O Performance when using 3.4
> > > >>>> (rc3 and rc4 tested) as Dom0 Kernel;
> > > >>>> 
> > > >>>> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers
> > > >>>> (gplpv_Vista2008x64_0.11.0.357.msi);
> > > >>>> With 3.4 it drops to about a third of that.
> > > >>>> 
> > > >>>> Xen Version is xen-unstable:
> > > >>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100
> > > >>>> 25209:e6b20ec1824c
> > > >>>> 
> > > >>>> Disk config line is:
> > > >>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
> > > >>>> - it uses blkback.
> > > >>> 
> > > >>> I fail to see what could be the cause of the issue: nothing on the
> > > >>> blkback side should affect performances significantly.
> > > >>> You could try reverting the four patches to blkback that were
> > > >>> applied between 3.3 and 3.4-rc3 just to make sure it is not a
> > > >>> blkback regression:
> > > >>> 
> > > >>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
> > > >>> 
> > > >>> Daniel De Graaf (2):
> > > >>>        xen/blkback: use grant-table.c hypercall wrappers
> > > >> 
> > > >> Hm.. Perhaps this patch fixes it a possible perf (I would think that
> > > >> the compiler would have kept the result of the first call to
> > > >> vaddr(req, i) somewhere.. but not sure) lost with the mentioned
> > > >> patch:
> > > >> 
> > > >> diff --git a/drivers/block/xen-blkback/blkback.c
> > > > 
> > > > b/drivers/block/xen-blkback/blkback.c
> > > > 
> > > >> index 73f196c..65dbadc 100644
> > > >> --- a/drivers/block/xen-blkback/blkback.c
> > > >> +++ b/drivers/block/xen-blkback/blkback.c
> > > >> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req
> > > >> *req)
> > > >> 
> > > >>   	int ret;
> > > >>   	
> > > >>   	for (i = 0; i<  req->nr_pages; i++) {
> > > >> 
> > > >> +		unsigned long addr;
> > > >> 
> > > >>   		handle = pending_handle(req, i);
> > > >>   		if (handle == BLKBACK_INVALID_HANDLE)
> > > >>   		
> > > >>   			continue;
> > > >> 
> > > >> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
> > > >> +		addr = vaddr(req, i);
> > > >> +		gnttab_set_unmap_op(&unmap[invcount], addr,
> > > >> 
> > > >>   				    GNTMAP_host_map, handle);
> > > >>   		
> > > >>   		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
> > > >> 
> > > >> -		pages[invcount] = virt_to_page(vaddr(req, i));
> > > >> +		pages[invcount] = virt_to_page(addr);
> > > >> 
> > > >>   		invcount++;
> > > >>   	
> > > >>   	}
> > > >>   	
> > > >>>        xen/blkback: Enable blkback on HVM guests
> > > >>> 
> > > >>> Konrad Rzeszutek Wilk (2):
> > > >>>        xen/blkback: Squash the discard support for 'file' and 'phy'
> > > >>>        type. xen/blkback: Make optional features be really
> > > >>>        optional.
> > > >>> 
> > > >>> _______________________________________________
> > > >>> Xen-devel mailing list
> > > >>> Xen-devel@lists.xen.org
> > > >>> http://lists.xen.org/xen-devel
> > > > 
> > > > that made it even worse :)
> > > > Write Performance is down to about 7mb/s (with 3.3: ~130mb/s)
> > > > Read "only" down to 40mb/s (with 3.3: ~140mb/s)
> > > 
> > > I doubt this patch can have any meaningful positive or negative
> > > performance effect at all - are you sure you're doing comparable
> > > runs? After all this is all just about a few arithmetic operations
> > > and an array access, which I'd expect to hide in the noise.
> > > 
> > > Jan
> > 
> > I redid the test;
> > 
> > a) with 3.3.0 kernel
> > b) with 3.4.0-rc4
> > c) with 3.40-rc4 and above patch
> > 
> > everything else remained the same, i.e. test-program and test-scenario
> > was not changed and started after about 5min of domu bootup (so that no
> > strange bootup-effects become relevant); same phy-backend (lvm on ssd),
> > same everything else; so i cant see what else except the used dom0
> > kernel is causing this issue; but here are the numbers:
> > 
> > a) read: 135mb/s write: 142mb/s
> > b) read: 39mb/s  write: 39mb/s
> > c) read: 40mb/s  write: 40mb/s
> > 
> > Only thing that may become relevant is the difference in kernel-config
> > betwen 3.3 and 3.4 - here's the diff :
> > http://pastebin.com/raw.php?i=Dy71Fegq
> > 
> > Jan, it seems you're right: The patch doesn't add extra performance
> > regression - i guess i had an i/o intensive task running in dom0 while
> > doing the benchmark yesterday, so that the write performance got so bad.
> > sorry for that.
> > 
> > Still there's a significant performance penalty from 3.3 to 3.4
> 
> Could you please try to revert the following commits?
> 
> git revert -n a71e23d9925517e609dfcb72b5874f33cdb0d2ad
> git revert -n 3389bb8bf76180eecaffdfa7dd5b35fa4a2ce9b5
> git revert -n 4dae76705fc8f9854bb732f9944e7ff9ba7a8e9f
> git revert -n b2167ba6dd89d55ced26a867fad8f0fe388fd595
> git revert -n 4f14faaab4ee46a046b6baff85644be199de718c
> git revert -n 9846ff10af12f9e7caac696737db6c990592a74a

After reverting said 6 commits the performance is back to normal (thanks for
the commit IDs - I had difficulty finding them).

Should I try to narrow it down to one of these 6, or do you have a hint as to
which one it might be?
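
One way to let git do the narrowing automatically is `git bisect run` with the
benchmark as the pass/fail script. A self-contained sketch on a throwaway
repository follows; the six commits, throughput numbers, and threshold are all
made up, and against the real tree the endpoints would be v3.3 (good) and
v3.4-rc4 (bad), with the actual I/O benchmark as the run script:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
# Six commits carrying a fake throughput number; "commit 4" plays
# the regressing commit.
n=0
for mbps in 135 134 133 40 39 38; do
    n=$((n + 1))
    echo "$mbps" > throughput.txt
    git add throughput.txt
    git commit -qm "commit $n ($mbps MB/s)"
done
# Oldest commit is known good, HEAD is known bad.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
# The run script: exit 0 = good run; below 100 "MB/s" = regression.
git bisect run sh -c 'test "$(cat throughput.txt)" -ge 100' >/dev/null 2>&1
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "first bad commit: $first_bad"
git bisect reset >/dev/null 2>&1
```

Since bisect halves the candidate set per step, six commits need at most three
benchmark runs.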

Greetings
Tobias


* Re: Degraded I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24 14:07             ` Tobias Geiger
@ 2012-04-24 14:16               ` Stefano Stabellini
  2012-04-24 16:30               ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 15+ messages in thread
From: Stefano Stabellini @ 2012-04-24 14:16 UTC (permalink / raw)
  To: Tobias Geiger
  Cc: Konrad Rzeszutek Wilk, xen-devel, Jan Beulich, Stefano Stabellini

On Tue, 24 Apr 2012, Tobias Geiger wrote:
> Am Dienstag, 24. April 2012, 14:52:31 schrieb Stefano Stabellini:
> > On Tue, 24 Apr 2012, Tobias Geiger wrote:
> > > Am Dienstag, 24. April 2012, 09:27:42 schrieb Jan Beulich:
> > > > >>> On 23.04.12 at 22:53, Tobias Geiger <tobias.geiger@vido.info> wrote:
> > > > > Am 23.04.2012 17:24, schrieb Konrad Rzeszutek Wilk:
> > > > >> On Mon, Apr 23, 2012 at 12:53:03PM +0100, Stefano Stabellini wrote:
> > > > >>> On Mon, 23 Apr 2012, Tobias Geiger wrote:
> > > > >>>> Hello!
> > > > >>>> 
> > > > >>>> i noticed a considerable drop in I/O Performance when using 3.4
> > > > >>>> (rc3 and rc4 tested) as Dom0 Kernel;
> > > > >>>> 
> > > > >>>> With 3.3 i get over 100mb/s in a HVM DomU (win64) with PV Drivers
> > > > >>>> (gplpv_Vista2008x64_0.11.0.357.msi);
> > > > >>>> With 3.4 it drops to about a third of that.
> > > > >>>> 
> > > > >>>> Xen Version is xen-unstable:
> > > > >>>> xen_changeset          : Tue Apr 17 19:13:52 2012 +0100
> > > > >>>> 25209:e6b20ec1824c
> > > > >>>> 
> > > > >>>> Disk config line is:
> > > > >>>> disk = [ '/dev/vg_ssd/win7system,,hda' ]
> > > > >>>> - it uses blkback.
> > > > >>> 
> > > > >>> I fail to see what could be the cause of the issue: nothing on the
> > > > >>> blkback side should affect performances significantly.
> > > > >>> You could try reverting the four patches to blkback that were
> > > > >>> applied between 3.3 and 3.4-rc3 just to make sure it is not a
> > > > >>> blkback regression:
> > > > >>> 
> > > > >>> $ git shortlog v3.3..v3.4-rc3 drivers/block/xen-blkback
> > > > >>> 
> > > > >>> Daniel De Graaf (2):
> > > > >>>        xen/blkback: use grant-table.c hypercall wrappers
> > > > >> 
> > > > >> Hm.. Perhaps this patch fixes it a possible perf (I would think that
> > > > >> the compiler would have kept the result of the first call to
> > > > >> vaddr(req, i) somewhere.. but not sure) lost with the mentioned
> > > > >> patch:
> > > > >> 
> > > > >> diff --git a/drivers/block/xen-blkback/blkback.c
> > > > > 
> > > > > b/drivers/block/xen-blkback/blkback.c
> > > > > 
> > > > >> index 73f196c..65dbadc 100644
> > > > >> --- a/drivers/block/xen-blkback/blkback.c
> > > > >> +++ b/drivers/block/xen-blkback/blkback.c
> > > > >> @@ -327,13 +327,15 @@ static void xen_blkbk_unmap(struct pending_req
> > > > >> *req)
> > > > >> 
> > > > >>   	int ret;
> > > > >>   	
> > > > >>   	for (i = 0; i<  req->nr_pages; i++) {
> > > > >> 
> > > > >> +		unsigned long addr;
> > > > >> 
> > > > >>   		handle = pending_handle(req, i);
> > > > >>   		if (handle == BLKBACK_INVALID_HANDLE)
> > > > >>   		
> > > > >>   			continue;
> > > > >> 
> > > > >> -		gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
> > > > >> +		addr = vaddr(req, i);
> > > > >> +		gnttab_set_unmap_op(&unmap[invcount], addr,
> > > > >> 
> > > > >>   				    GNTMAP_host_map, handle);
> > > > >>   		
> > > > >>   		pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
> > > > >> 
> > > > >> -		pages[invcount] = virt_to_page(vaddr(req, i));
> > > > >> +		pages[invcount] = virt_to_page(addr);
> > > > >> 
> > > > >>   		invcount++;
> > > > >>   	
> > > > >>   	}
> > > > >>   	
> > > > >>>        xen/blkback: Enable blkback on HVM guests
> > > > >>> 
> > > > >>> Konrad Rzeszutek Wilk (2):
> > > > >>>        xen/blkback: Squash the discard support for 'file' and 'phy'
> > > > >>>        type. xen/blkback: Make optional features be really
> > > > >>>        optional.
> > > > >>> 
> > > > >>> _______________________________________________
> > > > >>> Xen-devel mailing list
> > > > >>> Xen-devel@lists.xen.org
> > > > >>> http://lists.xen.org/xen-devel
> > > > > 
> > > > > that made it even worse :)
> > > > > Write performance is down to about 7 MB/s (with 3.3: ~130 MB/s)
> > > > > Read is "only" down to 40 MB/s (with 3.3: ~140 MB/s)
> > > > 
> > > > I doubt this patch can have any meaningful positive or negative
> > > > performance effect at all - are you sure you're doing comparable
> > > > runs? After all this is all just about a few arithmetic operations
> > > > and an array access, which I'd expect to hide in the noise.
> > > > 
> > > > Jan
> > > 
> > > I redid the test;
> > > 
> > > a) with 3.3.0 kernel
> > > b) with 3.4.0-rc4
> > > c) with 3.4.0-rc4 and the above patch
> > > 
> > > everything else remained the same, i.e. the test program and test scenario
> > > were not changed and were started about 5 min after domU bootup (so that
> > > no strange bootup effects become relevant); same phy backend (LVM on
> > > SSD), same everything else; so I can't see what, other than the dom0
> > > kernel, could be causing this issue; but here are the numbers:
> > > 
> > > a) read: 135 MB/s  write: 142 MB/s
> > > b) read: 39 MB/s   write: 39 MB/s
> > > c) read: 40 MB/s   write: 40 MB/s
> > > 
> > > The only thing that might still be relevant is the kernel-config difference
> > > between 3.3 and 3.4 - here's the diff:
> > > http://pastebin.com/raw.php?i=Dy71Fegq
> > > 
> > > Jan, it seems you're right: the patch doesn't add an extra performance
> > > regression - I guess I had an I/O-intensive task running in dom0 while
> > > doing the benchmark yesterday, which is why the write performance got so
> > > bad. Sorry for that.
> > > 
> > > Still there's a significant performance penalty from 3.3 to 3.4
> > 
> > Could you please try to revert the following commits?
> > 
> > git revert -n a71e23d9925517e609dfcb72b5874f33cdb0d2ad
> > git revert -n 3389bb8bf76180eecaffdfa7dd5b35fa4a2ce9b5
> > git revert -n 4dae76705fc8f9854bb732f9944e7ff9ba7a8e9f
> > git revert -n b2167ba6dd89d55ced26a867fad8f0fe388fd595
> > git revert -n 4f14faaab4ee46a046b6baff85644be199de718c
> > git revert -n 9846ff10af12f9e7caac696737db6c990592a74a
> 
> after reverting said 6 commits (thanks for the ids - I had difficulty
> finding them), the performance is back to normal.
> 
> Should I try to narrow it down to one of these 6, or do you have a hint as
> to which one it might be?

that would be great :)
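
The narrowing the thread settles on (revert, rebuild, re-run the benchmark) can also be automated with `git bisect run`, using the benchmark as the good/bad oracle. The following is a toy, self-contained sketch: the repository, the commits, and the "regression" marker file standing in for the benchmark are all fabricated for illustration.

```shell
#!/bin/sh
# Demo of `git bisect run`: build a toy repo in which one commit
# introduces a "regression" (a marker file), then let git find it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q >/dev/null 2>&1
git config user.email demo@example.invalid
git config user.name "bisect demo"

good=""; culprit=""
for i in 1 2 3 4 5 6 7 8; do
    echo "change $i" >> file.txt
    if [ "$i" -eq 5 ]; then touch regression; fi    # commit 5 is the bad one
    git add -A
    git commit -qm "commit $i"
    if [ "$i" -eq 1 ]; then good=$(git rev-parse HEAD); fi
    if [ "$i" -eq 5 ]; then culprit=$(git rev-parse HEAD); fi
done

# HEAD (commit 8) is known bad, commit 1 is known good; the run script is
# the stand-in for the I/O benchmark: exit 0 = fast (good), 1 = slow (bad).
git bisect start HEAD "$good" >/dev/null 2>&1
git bisect run sh -c '! test -f regression' >/dev/null 2>&1
first_bad=$(git rev-parse refs/bisect/bad)
git bisect reset >/dev/null 2>&1

if [ "$first_bad" = "$culprit" ]; then
    echo "bisect identified the culprit commit"
fi
```

In the real case the run script would build and boot the dom0 kernel and compare the measured throughput against a threshold (say, good above ~100 MB/s, bad below ~50 MB/s).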


* Re: Degregated I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24 14:07             ` Tobias Geiger
  2012-04-24 14:16               ` Stefano Stabellini
@ 2012-04-24 16:30               ` Konrad Rzeszutek Wilk
  2012-04-24 23:21                 ` Tobias Geiger
  1 sibling, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2012-04-24 16:30 UTC (permalink / raw)
  To: Tobias Geiger; +Cc: xen-devel, Jan Beulich, Stefano Stabellini

> > > I redid the test;
> > > 
> > > a) with 3.3.0 kernel
> > > b) with 3.4.0-rc4
> > > c) with 3.4.0-rc4 and the above patch
> > > 
> > > everything else remained the same, i.e. the test program and test scenario
> > > were not changed and were started about 5 min after domU bootup (so that
> > > no strange bootup effects become relevant); same phy backend (LVM on
> > > SSD), same everything else; so I can't see what, other than the dom0
> > > kernel, could be causing this issue; but here are the numbers:
> > > 
> > > a) read: 135 MB/s  write: 142 MB/s
> > > b) read: 39 MB/s   write: 39 MB/s
> > > c) read: 40 MB/s   write: 40 MB/s
> > > 
> > > The only thing that might still be relevant is the kernel-config difference
> > > between 3.3 and 3.4 - here's the diff:
> > > http://pastebin.com/raw.php?i=Dy71Fegq
> > > 
> > > Jan, it seems you're right: the patch doesn't add an extra performance
> > > regression - I guess I had an I/O-intensive task running in dom0 while
> > > doing the benchmark yesterday, which is why the write performance got so
> > > bad. Sorry for that.
> > > 
> > > Still there's a significant performance penalty from 3.3 to 3.4
> > 
> > Could you please try to revert the following commits?
> > 
> > git revert -n a71e23d9925517e609dfcb72b5874f33cdb0d2ad

No way.
> > git revert -n 3389bb8bf76180eecaffdfa7dd5b35fa4a2ce9b5

Startup.
> > git revert -n 4dae76705fc8f9854bb732f9944e7ff9ba7a8e9f

Hm, this is just during startup.
> > git revert -n b2167ba6dd89d55ced26a867fad8f0fe388fd595

No way.


> > git revert -n 4f14faaab4ee46a046b6baff85644be199de718c

Perhaps? But I am not seeing it.

> > git revert -n 9846ff10af12f9e7caac696737db6c990592a74a

Perhaps?
> 
> after reverting said 6 commits (thanks for the ids - I had difficulty
> finding them), the performance is back to normal.
> 
> Should I try to narrow it down to one of these 6, or do you have a hint as
> to which one it might be?

I think either of these: 4f14faaab4ee46a046b6baff85644be199de718c or
9846ff10af12f9e7caac696737db6c990592a74a might be the culprit.

Try the 9846ff10 first.

> 
> Greetings
> Tobias


* Re: Degregated I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24 16:30               ` Konrad Rzeszutek Wilk
@ 2012-04-24 23:21                 ` Tobias Geiger
  2012-04-25 13:44                   ` Stefano Stabellini
  0 siblings, 1 reply; 15+ messages in thread
From: Tobias Geiger @ 2012-04-24 23:21 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Jan Beulich, Stefano Stabellini

Am 24.04.2012 18:30, schrieb Konrad Rzeszutek Wilk:
>>>> I redid the test;
>>>>
>>>> a) with 3.3.0 kernel
>>>> b) with 3.4.0-rc4
>>>> c) with 3.4.0-rc4 and the above patch
>>>>
>>>> everything else remained the same, i.e. the test program and test scenario
>>>> were not changed and were started about 5 min after domU bootup (so that
>>>> no strange bootup effects become relevant); same phy backend (LVM on
>>>> SSD), same everything else; so I can't see what, other than the dom0
>>>> kernel, could be causing this issue; but here are the numbers:
>>>>
>>>> a) read: 135 MB/s  write: 142 MB/s
>>>> b) read: 39 MB/s   write: 39 MB/s
>>>> c) read: 40 MB/s   write: 40 MB/s
>>>>
>>>> The only thing that might still be relevant is the kernel-config difference
>>>> between 3.3 and 3.4 - here's the diff:
>>>> http://pastebin.com/raw.php?i=Dy71Fegq
>>>>
>>>> Jan, it seems you're right: the patch doesn't add an extra performance
>>>> regression - I guess I had an I/O-intensive task running in dom0 while
>>>> doing the benchmark yesterday, which is why the write performance got so
>>>> bad. Sorry for that.
>>>>
>>>> Still there's a significant performance penalty from 3.3 to 3.4
>>> Could you please try to revert the following commits?
>>>
>>> git revert -n a71e23d9925517e609dfcb72b5874f33cdb0d2ad
> No way.
>>> git revert -n 3389bb8bf76180eecaffdfa7dd5b35fa4a2ce9b5
> Startup.
>>> git revert -n 4dae76705fc8f9854bb732f9944e7ff9ba7a8e9f
> Hm, this is just during startup.
>>> git revert -n b2167ba6dd89d55ced26a867fad8f0fe388fd595
> No way.
>
>
>>> git revert -n 4f14faaab4ee46a046b6baff85644be199de718c
> Perhaps? But I am not seeing it.
>
>>> git revert -n 9846ff10af12f9e7caac696737db6c990592a74a
> Perhaps?
>> after reverting said 6 commits (thanks for the ids - I had difficulty
>> finding them), the performance is back to normal.
>>
>> Should I try to narrow it down to one of these 6, or do you have a hint as
>> to which one it might be?
> I think either of these: 4f14faaab4ee46a046b6baff85644be199de718c or
> 9846ff10af12f9e7caac696737db6c990592a74a might be the culprit.
>
> Try the 9846ff10 first.
>
>> Greetings
>> Tobias

Hi,

9846ff10 was it!
after reverting it, performance returned to normal.

Thanks!
Tobias


* Re: Degregated I/O Performance since 3.4 - Regression in 3.4?
  2012-04-24 23:21                 ` Tobias Geiger
@ 2012-04-25 13:44                   ` Stefano Stabellini
  2012-04-25 13:57                     ` Tobias Geiger
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2012-04-25 13:44 UTC (permalink / raw)
  To: Tobias Geiger
  Cc: Stefano Stabellini, xen-devel, Jan Beulich, Konrad Rzeszutek Wilk

On Wed, 25 Apr 2012, Tobias Geiger wrote:
> Hi,
> 
> 9846ff10 was it!
> after reverting it, performance returned to normal.

Argh, that is one of my commits, and is supposed to be a performance
improvement!

I couldn't reproduce the regression you are seeing, but the patch is
pretty simple, so just by reading it again I think I managed to spot a
possible error. Could you please try again with the appended patch?



diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index 4b33acd..0a8a17c 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -274,7 +274,7 @@ static unsigned int cpu_from_evtchn(unsigned int evtchn)
 
 static bool pirq_check_eoi_map(unsigned irq)
 {
-	return test_bit(irq, pirq_eoi_map);
+	return test_bit(pirq_from_irq(irq), pirq_eoi_map);
 }
 
 static bool pirq_needs_eoi_flag(unsigned irq)


* Re: Degregated I/O Performance since 3.4 - Regression in 3.4?
  2012-04-25 13:44                   ` Stefano Stabellini
@ 2012-04-25 13:57                     ` Tobias Geiger
  2012-04-25 15:05                       ` Stefano Stabellini
  0 siblings, 1 reply; 15+ messages in thread
From: Tobias Geiger @ 2012-04-25 13:57 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, Jan Beulich, Konrad Rzeszutek Wilk

Am Mittwoch, 25. April 2012, 15:44:44 schrieb Stefano Stabellini:
> On Wed, 25 Apr 2012, Tobias Geiger wrote:
> > Hi,
> > 
> > 9846ff10 was it!
> > after reverting it, performance returned to normal.
> 
> Argh, that is one of my commits, and is supposed to be a performance
> improvement!
> 
> I couldn't reproduce the regression you are seeing, but the patch is
> pretty simple, so just by reading it again I think I managed to spot a
> possible error. Could you please try again with the appended patch?
> 
> 
> 
> diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> index 4b33acd..0a8a17c 100644
> --- a/drivers/xen/events.c
> +++ b/drivers/xen/events.c
> @@ -274,7 +274,7 @@ static unsigned int cpu_from_evtchn(unsigned int evtchn)
> 
>  static bool pirq_check_eoi_map(unsigned irq)
>  {
> -	return test_bit(irq, pirq_eoi_map);
> +	return test_bit(pirq_from_irq(irq), pirq_eoi_map);
>  }
> 
>  static bool pirq_needs_eoi_flag(unsigned irq)

Looks good.

I applied it to 3.4.0-rc4 and the performance went back to normal.

Thanks!
Tobias


* Re: Degregated I/O Performance since 3.4 - Regression in 3.4?
  2012-04-25 15:05                       ` Stefano Stabellini
@ 2012-04-25 15:03                         ` Tobias Geiger
  0 siblings, 0 replies; 15+ messages in thread
From: Tobias Geiger @ 2012-04-25 15:03 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: xen-devel, Jan Beulich, Konrad Rzeszutek Wilk

Am Mittwoch, 25. April 2012, 17:05:26 schrieb Stefano Stabellini:
> On Wed, 25 Apr 2012, Tobias Geiger wrote:
> > Am Mittwoch, 25. April 2012, 15:44:44 schrieb Stefano Stabellini:
> > > On Wed, 25 Apr 2012, Tobias Geiger wrote:
> > > > Hi,
> > > > 
> > > > 9846ff10 was it!
> > > > after reverting it, performance returned to normal.
> > > 
> > > Argh, that is one of my commits, and is supposed to be a performance
> > > improvement!
> > > 
> > > I couldn't reproduce the regression you are seeing, but the patch is
> > > pretty simple, so just by reading it again I think I managed to spot a
> > > possible error. Could you please try again with the appended patch?
> > > 
> > > 
> > > 
> > > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > > index 4b33acd..0a8a17c 100644
> > > --- a/drivers/xen/events.c
> > > +++ b/drivers/xen/events.c
> > > @@ -274,7 +274,7 @@ static unsigned int cpu_from_evtchn(unsigned int evtchn)
> > > 
> > >  static bool pirq_check_eoi_map(unsigned irq)
> > >  {
> > > 
> > > -	return test_bit(irq, pirq_eoi_map);
> > > +	return test_bit(pirq_from_irq(irq), pirq_eoi_map);
> > > 
> > >  }
> > >  
> > >  static bool pirq_needs_eoi_flag(unsigned irq)
> > 
> > Looks good.
> > 
> > I applied it to 3.4.0-rc4 and the performance went back to normal.
> 
> I'll add your Tested-by if that is OK with you

of course - thanks!


* Re: Degregated I/O Performance since 3.4 - Regression in 3.4?
  2012-04-25 13:57                     ` Tobias Geiger
@ 2012-04-25 15:05                       ` Stefano Stabellini
  2012-04-25 15:03                         ` Tobias Geiger
  0 siblings, 1 reply; 15+ messages in thread
From: Stefano Stabellini @ 2012-04-25 15:05 UTC (permalink / raw)
  To: Tobias Geiger
  Cc: Konrad Rzeszutek Wilk, xen-devel, Jan Beulich, Stefano Stabellini

On Wed, 25 Apr 2012, Tobias Geiger wrote:
> Am Mittwoch, 25. April 2012, 15:44:44 schrieb Stefano Stabellini:
> > On Wed, 25 Apr 2012, Tobias Geiger wrote:
> > > Hi,
> > > 
> > > 9846ff10 was it!
> > > after reverting it, performance returned to normal.
> > 
> > Argh, that is one of my commits, and is supposed to be a performance
> > improvement!
> > 
> > I couldn't reproduce the regression you are seeing, but the patch is
> > pretty simple, so just by reading it again I think I managed to spot a
> > possible error. Could you please try again with the appended patch?
> > 
> > 
> > 
> > diff --git a/drivers/xen/events.c b/drivers/xen/events.c
> > index 4b33acd..0a8a17c 100644
> > --- a/drivers/xen/events.c
> > +++ b/drivers/xen/events.c
> > @@ -274,7 +274,7 @@ static unsigned int cpu_from_evtchn(unsigned int evtchn)
> > 
> >  static bool pirq_check_eoi_map(unsigned irq)
> >  {
> > -	return test_bit(irq, pirq_eoi_map);
> > +	return test_bit(pirq_from_irq(irq), pirq_eoi_map);
> >  }
> > 
> >  static bool pirq_needs_eoi_flag(unsigned irq)
> 
> Looks good.
> 
> I applied it to 3.4.0-rc4 and the performance went back to normal.

I'll add your Tested-by if that is OK with you


end of thread, other threads:[~2012-04-25 15:05 UTC | newest]

Thread overview: 15+ messages
2012-04-23 10:02 Degregated I/O Performance since 3.4 - Regression in 3.4? Tobias Geiger
2012-04-23 11:53 ` Stefano Stabellini
2012-04-23 15:24   ` Konrad Rzeszutek Wilk
2012-04-23 20:53     ` Tobias Geiger
2012-04-24  7:27       ` Jan Beulich
2012-04-24 12:09         ` Tobias Geiger
2012-04-24 12:52           ` Stefano Stabellini
2012-04-24 14:07             ` Tobias Geiger
2012-04-24 14:16               ` Stefano Stabellini
2012-04-24 16:30               ` Konrad Rzeszutek Wilk
2012-04-24 23:21                 ` Tobias Geiger
2012-04-25 13:44                   ` Stefano Stabellini
2012-04-25 13:57                     ` Tobias Geiger
2012-04-25 15:05                       ` Stefano Stabellini
2012-04-25 15:03                         ` Tobias Geiger
