* [Qemu-devel] GSoC 2011: S3 Trio, AHCI
@ 2011-04-05 13:36 Roland Elek
  2011-04-06 18:21 ` Luiz Capitulino
  2011-04-07 10:57 ` Natalia Portillo
  0 siblings, 2 replies; 8+ messages in thread
From: Roland Elek @ 2011-04-05 13:36 UTC (permalink / raw)
  To: qemu-devel

Dear Qemu developers,

First, I'd like to reintroduce myself, as my university and official 
duties prevented me from being active in the community since last year. 
I am Roland Elek, a student from Hungary, and a successful student 
participant of Google Summer of Code 2010. This year, I would like to 
participate again. I know I'm a bit late, but I'm still hoping to get 
things arranged before the deadline.

Last year, I worked on AHCI emulation with Alex as my mentor. Do you 
think a proper summer project could be proposed from what is still 
missing? If so, can I kindly ask someone to give me some pointers to 
what the project needs the most, and where I should look first for 
things to include in my proposal? Also, if the idea is feasible, would 
there be someone who could be my mentor?

Last year, I was also interested in working on S3 Trio emulation. This 
year, the same idea is on the ideas list. The hardware is pretty 
thoroughly documented through source code and textual documentation, and 
I'm already familiar with adding PCI devices to Qemu, so I do see a 
rough outline of how I would implement it.

However, last year, Paul Brook commented [1] that he wasn't convinced 
about the usefulness of emulating an S3 Trio or Virge card, because of 
performance reasons. He suggested that accelerating the 2D engine would 
be tricky because the framebuffer is exposed to the guest. This might be 
just me not fully understanding his point, but isn't this also the case 
with the Cirrus Logic GD5446 card?

He also suggested paravirtualization for 3D acceleration. Do you think 
it would make a good summer project?

Thank you in advance for your help.

Regards,
Roland

[1] http://lists.gnu.org/archive/html/qemu-devel/2010-04/msg00012.html

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-05 13:36 [Qemu-devel] GSoC 2011: S3 Trio, AHCI Roland Elek
@ 2011-04-06 18:21 ` Luiz Capitulino
  2011-04-06 22:54   ` Paul Brook
  2011-04-07 10:57 ` Natalia Portillo
  1 sibling, 1 reply; 8+ messages in thread
From: Luiz Capitulino @ 2011-04-06 18:21 UTC (permalink / raw)
  To: Roland Elek; +Cc: Stefan Hajnoczi, qemu-devel, paul

On Tue, 05 Apr 2011 15:36:12 +0200
Roland Elek <elek.roland@gmail.com> wrote:

> Dear Qemu developers,
> 
> First, I'd like to reintroduce myself, as my university and official 
> duties prevented me from being active in the community since last year. 
> I am Roland Elek, a student from Hungary, and a successful student 
> participant of Google Summer of Code 2010. This year, I would like to 
> participate again. I know I'm a bit late, but I'm still hoping to get 
> things arranged before the deadline.
> 
> Last year, I worked on AHCI emulation with Alex as my mentor. Do you 
> think a proper summer project could be proposed from what is still 
> missing? If so, can I kindly ask someone to give me some pointers to 
> what the project needs the most, and where I should look first for 
> things to include in my proposal? Also, if the idea is feasible, would 
> there be someone who could be my mentor?

The process is the same: you choose a project from the list (or come up with
your own), contact the suggested mentor to talk about the project, and
then submit your proposal.

Ideas page:

 http://wiki.qemu.org/Google_Summer_of_Code_2011

> Last year, I was also interested in working on S3 Trio emulation. This 
> year, the same idea is on the ideas list. The hardware is pretty 
> thoroughly documented through source code and textual documentation, and 
> I'm already familiar with adding PCI devices to Qemu, so I do see a 
> rough outline of how I would implement it.
> 
> However, last year, Paul Brook commented [1] that he wasn't convinced 
> about the usefulness of emulating an S3 Trio or Virge card, because of 
> performance reasons. He suggested that accelerating the 2D engine would 
> be tricky because the framebuffer is exposed to the guest. This might be 
> just me not fully understanding his point, but isn't this also the case 
> with the Cirrus Logic GD5446 card?
> 
> He also suggested paravirtualization for 3D acceleration. Do you think 
> it would make a good summer project?

I can't comment on these issues, CC'ing Paul, Anthony and Stefan.

> 
> Thank you in advance for your help.
> 
> Regards,
> Roland
> 
> [1] http://lists.gnu.org/archive/html/qemu-devel/2010-04/msg00012.html
> 


* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-06 18:21 ` Luiz Capitulino
@ 2011-04-06 22:54   ` Paul Brook
  2011-04-07 10:10     ` Alon Levy
                       ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Paul Brook @ 2011-04-06 22:54 UTC (permalink / raw)
  To: Luiz Capitulino; +Cc: Stefan Hajnoczi, qemu-devel, Roland Elek

> > Last year, I was also interested in working on S3 Trio emulation. This
> > year, the same idea is on the ideas list. The hardware is pretty
> > thoroughly documented through source code and textual documentation, and
> > I'm already familiar with adding PCI devices to Qemu, so I do see a
> > rough outline of how I would implement it.
> > 
> > However, last year, Paul Brook commented [1] that he wasn't convinced
> > about the usefulness of emulating an S3 Trio or Virge card, because of
> > performance reasons. He suggested that accelerating the 2D engine would
> > be tricky because the framebuffer is exposed to the guest. This might be
> > just me not fully understanding his point, but isn't this also the case
> > with the Cirrus Logic GD5446 card?
> > 
> > He also suggested paravirtualization for 3D acceleration. Do you think
> > it would make a good summer project?
> 
> I can't comment on these issues, CC'ing Paul, Anthony and Stefan.

My understanding is that Cirrus Logic cards also have 2D acceleration.  We 
implement this in qemu, but not in a way that's likely to be fast.  I don't 
really know either card in detail, but they're both a similar age, so I'd 
expect the functionality to be fairly similar.

The 2D engines you're talking about are of questionable benefit.  IIUC they're 
basically a memcpy engine with some weird bitmasking operations that line up 
with the Windows 3 GDI raster ops.  While accelerating this may have made sense 
on a 386, it's not worth the effort on modern CPUs.  The latency and overhead 
of setting up and synchronising with the async blit engine is greater than the 
cost of just doing it in software.  In practice modern desktop environments 
just use the 3D engine.
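To make the point concrete, the "weird bitmasking operations" are GDI-style raster ops (ROPs), and the whole engine reduces to a per-byte combine loop. The sketch below is purely illustrative: the ROP names follow the GDI operations they approximate, while the flat byte buffers and API shape are invented for the example and are not QEMU's actual implementation.

```python
# Minimal software sketch of a GDI-style blit engine.
# ROP names follow the Windows GDI raster ops they approximate;
# the buffer layout and API are invented for illustration.

ROPS = {
    "SRCCOPY":   lambda s, d: s,       # plain copy (a memcpy)
    "SRCAND":    lambda s, d: s & d,   # mask destination with source
    "SRCINVERT": lambda s, d: s ^ d,   # XOR, classically used for cursors
}

def blit(dst: bytes, src: bytes, rop: str) -> bytes:
    """Combine two equal-length pixel buffers with a raster op."""
    op = ROPS[rop]
    return bytes(op(s, d) & 0xFF for s, d in zip(src, dst))
```

On a modern CPU a loop like this is memory-bound; setting up an asynchronous blit engine and waiting for it to finish costs more than just running the loop, which is the overhead argument above.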

IMO emulating useful 'real' 3D hardware is not feasible.  In theory you could 
emulate an old card, however these are also of limited practical benefit.  For 
the S3 cards the 3D engine is so crippled that even when new it wasn't worth 
using.  You could maybe implement an old fixed-function card, e.g. an 
i810 or 3dfx card, but drivers for these are also getting hard to come by, 
and the functionality is still limited.  You basically get raster offloading, 
and everything else is done in software.  Emulation overhead may be greater 
than useful offloaded work.

For good 3D support you're looking at something shader based.  Emulating real 
hardware is not going to happen.  With real hardware the interface qemu needs 
to emulate is directly tied to the implementation details of that particular 
chipset.  The guest driver generally uses intimate knowledge of these 
implementation details (e.g. vram layout, shader ISA).  Different 
implementations may provide the same high-level functionality, however the 
low-level implementations are very different.  Reconstructing high-level 
operations from the low-level stream is extremely hard, probably harder than 
the main CPU emulation that qemu does.

IMO a good rule of thumb is that the output of the render pipeline should not 
be guest visible.  Anything where the guest can observe/manipulate the output 
or intermediate results makes it very hard to isolate the guest from the 
implementation details (i.e. whatever hardware acceleration the host 
provides).

There are already a handful of different paravirtual graphics drivers, of 
varying quality and openness.  This includes:

- Several OpenGL passthrough drivers.  These are effectively just re-
implementing GLX, often badly.  I suspect that given a decent virtual network, 
remote X (including 3D via GLX) already works pretty well.

- SPICE. IIUC this is an ugly hack that maps directly onto legacy Windows/GDI 
operations.  I'm not aware of any substantive plan for making this work well 
in other environments (using the subset that's basically a dumb framebuffer 
doesn't count), or for doing 3D.

- Whatever VMware uses.

- Whatever VirtualBox uses.

- At least two gallium3D based projects.  I think this includes Xen, and 
possibly VirtualBox.  Given the whole point of Gallium3D is to provide a 
common abstraction layer between the application API and the hardware this 
would be my choice.

Paul


* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-06 22:54   ` Paul Brook
@ 2011-04-07 10:10     ` Alon Levy
  2011-04-07 10:17     ` Alon Levy
  2011-04-07 13:13     ` Anthony Liguori
  2 siblings, 0 replies; 8+ messages in thread
From: Alon Levy @ 2011-04-07 10:10 UTC (permalink / raw)
  To: Paul Brook; +Cc: Stefan Hajnoczi, Roland Elek, qemu-devel, Luiz Capitulino

On Wed, Apr 06, 2011 at 11:54:47PM +0100, Paul Brook wrote:
> > > Last year, I was also interested in working on S3 Trio emulation. This
> > > year, the same idea is on the ideas list. The hardware is pretty
> > > thoroughly documented through source code and textual documentation, and
> > > I'm already familiar with adding PCI devices to Qemu, so I do see a
> > > rough outline of how I would implement it.
> > > 
> > > However, last year, Paul Brook commented [1] that he wasn't convinced
> > > about the usefulness of emulating an S3 Trio or Virge card, because of
> > > performance reasons. He suggested that accelerating the 2D engine would
> > > be tricky because the framebuffer is exposed to the guest. This might be
> > > just me not fully understanding his point, but isn't this also the case
> > > with the Cirrus Logic GD5446 card?
> > > 
> > > He also suggested paravirtualization for 3D acceleration. Do you think
> > > it would make a good summer project?
> > 
> > I can't comment on these issues, CC'ing Paul, Anthony and Stefan.
> 
> My understanding is that Cirrus logic cards also have 2D acceleration.  We 
> implement this in qemu, but not in a way that's likely to be fast.  I don't 
> really know either card in detail, but they're both a similar age, so I'd 
> expect the functionality to be fairly similar.
> 
> The 2D engines you're talking about are of questionable benefit.  IIUC they're 
> basically a memcpy engine with some weird bitmasking operations that line up 
> with the Windows 3 GDI raster ops.  While accelerating this may have made sense 
> on a 386, it's not worth the effort on modern CPUs.  The latency and overhead 
> of setting up and synchronising with the async blit engine is greater than the 
> cost of just doing it in software.  In practice modern desktop environments 
> just use the 3D engine.
> 
> IMO emulating useful 'real' 3D hardware is not feasible.  In theory you could 
> emulate an old card, however these are also of limited practical benefit.  For 
> the S3 cards the 3D engine is so crippled that even when new it wasn't worth 
> using.  You could maybe implement an old fixed-function card like, e.g. an 
> i810 or 3dfx card, however drivers for these are also getting hard to come by, 
> and the functionality is still limited.  You basically get raster offloading, 
> and everything else is done in software.  Emulation overhead may be greater 
> than useful offloaded work.
> 
> For good 3D support you're looking at something shader based.  Emulating real 
> hardware is not going to happen.  With real hardware the interface qemu needs 
> to emulate is directly tied to the implementation details of that particular 
> chipset.  The guest driver generally uses intimate knowledge of these 
> implementation details (e.g. vram layout, shader ISA).  Different 
> implementations may provide the same high-level functionality, however the 
> low-level implementations are very different.  Reconstructing high-level 
> operations from the low-level stream is extremely hard, probably harder than 
> the main CPU emulation that qemu does.
> 
> IMO a good rule of thumb is that the output of the render pipeline should not 
> be guest visible.  Anything where the guest can observe/manipulate the output 
> or intermediate results makes it very hard to isolate the guest from the 
> implementation details (i.e. whatever hardware acceleration the host 
> provides).
> 
> There are already a handful of different paravirtual graphics drivers, of 
> varying quality and openness.  This includes:
> 
> - Several OpenGL passthrough drivers.  These are effectively just re-
> implementing GLX, often badly.  I suspect that given a decent virtual network, 
> remote X (including 3D via GLX) already works pretty well.
> 
> - SPICE. IIUC this is an ugly hack that maps directly onto legacy Windows/GDI 
Hey, take that back! ;) Except for the "ugly" and "hack" parts, yes. It has also
grown surfaces support to map better to how X works.
> operations.  I'm not aware of any substantive plan for making this work well 
> in other environments (using the subset that's basically a dumb framebuffer 
> doesn't count), or for doing 3D.
We are planning 3D support based on shaders (basically, vgallium); the feature
page is http://spice-space.org/page/Features/Spice3D

> 
> - Whatever VMware uses.
> 
> - Whatever VirtualBox uses.
> 
> - At least two gallium3D based projects.  I think this includes Xen, and 
> possibly VirtualBox.  Given the whole point of Gallium3D is to provide a 
> common abstraction layer between the application API and the hardware this 
> would be my choice.
> 
> Paul
> 


* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-06 22:54   ` Paul Brook
  2011-04-07 10:10     ` Alon Levy
@ 2011-04-07 10:17     ` Alon Levy
  2011-04-07 13:13     ` Anthony Liguori
  2 siblings, 0 replies; 8+ messages in thread
From: Alon Levy @ 2011-04-07 10:17 UTC (permalink / raw)
  To: Paul Brook; +Cc: Stefan Hajnoczi, Roland Elek, qemu-devel, Luiz Capitulino

On Wed, Apr 06, 2011 at 11:54:47PM +0100, Paul Brook wrote:
> > > Last year, I was also interested in working on S3 Trio emulation. This
> > > year, the same idea is on the ideas list. The hardware is pretty
> > > thoroughly documented through source code and textual documentation, and
> > > I'm already familiar with adding PCI devices to Qemu, so I do see a
> > > rough outline of how I would implement it.
> > > 
> > > However, last year, Paul Brook commented [1] that he wasn't convinced
> > > about the usefulness of emulating an S3 Trio or Virge card, because of
> > > performance reasons. He suggested that accelerating the 2D engine would
> > > be tricky because the framebuffer is exposed to the guest. This might be
> > > just me not fully understanding his point, but isn't this also the case
> > > with the Cirrus Logic GD5446 card?
> > > 
> > > He also suggested paravirtualization for 3D acceleration. Do you think
> > > it would make a good summer project?
> > 
> > I can't comment on these issues, CC'ing Paul, Anthony and Stefan.
> 
> My understanding is that Cirrus logic cards also have 2D acceleration.  We 
> implement this in qemu, but not in a way that's likely to be fast.  I don't 
> really know either card in detail, but they're both a similar age, so I'd 
> expect the functionality to be fairly similar.
> 
> The 2D engines you're talking about are of questionable benefit.  IIUC they're 
> basically a memcpy engine with some weird bitmasking operations that line up 
> with the Windows 3 GDI raster ops.  While accelerating this may have made sense 
> on a 386, it's not worth the effort on modern CPUs.  The latency and overhead 
> of setting up and synchronising with the async blit engine is greater than the 
> cost of just doing it in software.  In practice modern desktop environments 
> just use the 3D engine.
> 
> IMO emulating useful 'real' 3D hardware is not feasible.  In theory you could 
> emulate an old card, however these are also of limited practical benefit.  For 
> the S3 cards the 3D engine is so crippled that even when new it wasn't worth 
> using.  You could maybe implement an old fixed-function card like, e.g. an 
> i810 or 3dfx card, however drivers for these are also getting hard to come by, 
> and the functionality is still limited.  You basically get raster offloading, 
> and everything else is done in software.  Emulation overhead may be greater 
> than useful offloaded work.
> 
> For good 3D support you're looking at something shader based.  Emulating real 
> hardware is not going to happen.  With real hardware the interface qemu needs 
> to emulate is directly tied to the implementation details of that particular 
> chipset.  The guest driver generally uses intimate knowledge of these 
> implementation details (e.g. vram layout, shader ISA).  Different 
> implementations may provide the same high-level functionality, however the 
> low-level implementations are very different.  Reconstructing high-level 
> operations from the low-level stream is extremely hard, probably harder than 
> the main CPU emulation that qemu does.
> 
> IMO a good rule of thumb is that the output of the render pipeline should not 
> be guest visible.  Anything where the guest can observe/manipulate the output 
> or intermediate results makes it very hard to isolate the guest from the 
> implementation details (i.e. whatever hardware acceleration the host 
> provides).
> 
> There are already a handful of different paravirtual graphics drivers, of 
> varying quality and openness.  This includes:
> 
> - Several OpenGL passthrough drivers.  These are effectively just re-
> implementing GLX, often badly.  I suspect that given a decent virtual network, 
> remote X (including 3D via GLX) already works pretty well.
> 
> - SPICE. IIUC this is an ugly hack that maps directly onto legacy Windows/GDI 
> operations.  I'm not aware of any substantive plan for making this work well 
> in other environments (using the subset that's basically a dumb framebuffer 
Also, SPICE doesn't let the guest touch the framebuffer unless it explicitly
asks for it (printscreen). So most of the time the 2D operations (like you
said, they are basically the GDI lexicon) are passed to the server, and from it
to the client, which actually performs them. If you do a printscreen, the server
also has to perform them to provide the updated framebuffer to the guest. The
guest, however, doesn't write to the framebuffer.

> doesn't count), or for doing 3D.
> 
> - Whatever VMware uses.
> 
> - Whatever VirtualBox uses.
> 
> - At least two gallium3D based projects.  I think this includes Xen, and 
> possibly VirtualBox.  Given the whole point of Gallium3D is to provide a 
> common abstraction layer between the application API and the hardware this 
> would be my choice.
> 
> Paul
> 


* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-05 13:36 [Qemu-devel] GSoC 2011: S3 Trio, AHCI Roland Elek
  2011-04-06 18:21 ` Luiz Capitulino
@ 2011-04-07 10:57 ` Natalia Portillo
  2011-04-07 12:13   ` Alexander Graf
  1 sibling, 1 reply; 8+ messages in thread
From: Natalia Portillo @ 2011-04-07 10:57 UTC (permalink / raw)
  To: Roland Elek; +Cc: qemu-devel

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi Roland,

On 05/04/2011, at 14:36, Roland Elek wrote:

> Dear Qemu developers,
> 
> First, I'd like to reintroduce myself, as my university and official duties prevented me from being active in the community since last year. I am Roland Elek, a student from Hungary, and a successful student participant of Google Summer of Code 2010. This year, I would like to participate again. I know I'm a bit late, but I'm still hoping to get things arranged before the deadline.
> 
> Last year, I worked on AHCI emulation with Alex as my mentor. Do you think a proper summer project could be proposed from what is still missing? If so, can I kindly ask someone to give me some pointers to what the project needs the most, and where I should look first for things to include in my proposal? Also, if the idea is feasible, would there be someone who could be my mentor?

You should ask Alex himself directly.

> Last year, I was also interested in working on S3 Trio emulation. This year, the same idea is on the ideas list. The hardware is pretty thoroughly documented through source code and textual documentation, and I'm already familiar with adding PCI devices to Qemu, so I do see a rough outline of how I would implement it.
> 
> However, last year, Paul Brook commented [1] that he wasn't convinced about the usefulness of emulating an S3 Trio or Virge card, because of performance reasons. He suggested that accelerating the 2D engine would be tricky because the framebuffer is exposed to the guest. This might be just me not fully understanding his point, but isn't this also the case with the Cirrus Logic GD5446 card?

The 2D acceleration engine of those cards was merely an implementation of Windows 3.1 GDI calls (bitblt, draw a circle, and so on) over a framebuffer.
They are pretty simple and easily converted to GDI+, SDL or Cocoa, whatever QEMU needs on the host to draw the framebuffer.

The idea of emulating an S3 Trio, however, is not to give 2D acceleration to guests but to have hardware with wider support from guests.
The S3 Trio is supported by almost all known x86 guests and a good number of non-x86 ones (including BeOS, Windows NT, and NeXTStep).

The GDI-accelerated functions were used only by Windows and only at some resolutions (mostly 640x480 at 16 colors). The card's VESA implementation was 2.0 (without 2D acceleration) and buggy enough that the manufacturer itself included a software implementation of VESA 3.0.

Anyway, digging again on Google shows me that the Trio also accelerated YUV to RGB conversion (easily done; I have it in my webcam emulation), that it is fully emulated by DOSBox (so their source can be used as a start), and of course, like last year, by VirtualPC (so emulation is possible and performance is not bad).
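YUV to RGB conversion really is a small fixed formula. As a reference point, here is a minimal full-range BT.601 sketch; it is a generic illustration, not taken from any of the implementations mentioned above:

```python
def yuv_to_rgb(y: int, u: int, v: int) -> tuple:
    """Convert one full-range BT.601 YUV pixel to 8-bit RGB."""
    d, e = u - 128, v - 128          # chroma components are centred on 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    r = clamp(y + 1.402 * e)
    g = clamp(y - 0.344136 * d - 0.714136 * e)
    b = clamp(y + 1.772 * d)
    return r, g, b
```

A hardware overlay does exactly this per pixel (usually with fixed-point arithmetic), which is why it is cheap to emulate in software.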

> He also suggested paravirtualization for 3D acceleration. Do you think it would make a good summer project?

For this you would need to implement some kind of message passing between guest and host, trap the guest's WGL/GLX/AGL calls, and pass them through as host WGL/GLX/AGL calls.
It is feasible, but you should provide your own drivers for the guests, because emulating the registers of a real 3D card would simply kill performance.
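As an illustration of this kind of guest/host call passing, here is a hypothetical sketch: the guest side serializes a GL-style call into a byte stream, and the host side decodes it and replays it against the real library. The opcode value and wire layout are invented for the example; a real driver pair would need a stable ABI, object handle tables, synchronization, and much more.

```python
import struct

GL_CLEAR_COLOR = 0x01   # hypothetical opcode, not a real WGL/GLX/AGL constant

def encode_call(opcode: int, *args: float) -> bytes:
    """Guest side: pack an opcode and float arguments into one message."""
    return struct.pack("<II", opcode, len(args)) + struct.pack(
        "<%df" % len(args), *args)

def decode_call(buf: bytes):
    """Host side: unpack a message back into (opcode, args) for replay."""
    opcode, argc = struct.unpack_from("<II", buf, 0)
    args = struct.unpack_from("<%df" % argc, buf, 8)
    return opcode, list(args)
```

The win over register-level emulation is that each message is a whole high-level call, so the host never has to reconstruct intent from low-level hardware accesses.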

Whatever you think best fits your abilities, apply for it on the GSoC webpage.

However, there are already two students applying for S3, and I would personally prefer everyone to have a choice, so I recommend you apply for finishing AHCI or for 3D virtualization, as you see fit.

Regards,
Natalia Portillo

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)
Comment: GPGTools - http://gpgtools.org

iF4EAREIAAYFAk2dmKcACgkQv/wfOsykIRTxTQD/QM1nKJGpLMRuCokKaoVBUYmK
94xs4L1rcbIXsxYoifwBALLZtuWZI29eP4Nz/DE55E5uX4AV3RHrcWw/ngvOPhD0
=46Q8
-----END PGP SIGNATURE-----


* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-07 10:57 ` Natalia Portillo
@ 2011-04-07 12:13   ` Alexander Graf
  0 siblings, 0 replies; 8+ messages in thread
From: Alexander Graf @ 2011-04-07 12:13 UTC (permalink / raw)
  To: Natalia Portillo; +Cc: qemu-devel, Roland Elek


On 07.04.2011, at 12:57, Natalia Portillo wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
> 
> Hi Roland,
> 
> On 05/04/2011, at 14:36, Roland Elek wrote:
> 
>> Dear Qemu developers,
>> 
>> First, I'd like to reintroduce myself, as my university and official duties prevented me from being active in the community since last year. I am Roland Elek, a student from Hungary, and a successful student participant of Google Summer of Code 2010. This year, I would like to participate again. I know I'm a bit late, but I'm still hoping to get things arranged before the deadline.
>> 
>> Last year, I worked on AHCI emulation with Alex as my mentor. Do you think a proper summer project could be proposed from what is still missing? If so, can I kindly ask someone to give me some pointers to what the project needs the most, and where I should look first for things to include in my proposal? Also, if the idea is feasible, would there be someone who could be my mentor?
> 
> You should ask Alex himself directly.

I don't think there's enough left to be done to warrant a full GSoC project.


Alex


* Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
  2011-04-06 22:54   ` Paul Brook
  2011-04-07 10:10     ` Alon Levy
  2011-04-07 10:17     ` Alon Levy
@ 2011-04-07 13:13     ` Anthony Liguori
  2 siblings, 0 replies; 8+ messages in thread
From: Anthony Liguori @ 2011-04-07 13:13 UTC (permalink / raw)
  To: Paul Brook; +Cc: Stefan Hajnoczi, Roland Elek, qemu-devel, Luiz Capitulino

On 04/06/2011 05:54 PM, Paul Brook wrote:
>>> Last year, I was also interested in working on S3 Trio emulation. This
>>> year, the same idea is on the ideas list. The hardware is pretty
>>> thoroughly documented through source code and textual documentation, and
>>> I'm already familiar with adding PCI devices to Qemu, so I do see a
>>> rough outline of how I would implement it.
>>>
>>> However, last year, Paul Brook commented [1] that he wasn't convinced
>>> about the usefulness of emulating an S3 Trio or Virge card, because of
>>> performance reasons. He suggested that accelerating the 2D engine would
>>> be tricky because the framebuffer is exposed to the guest. This might be
>>> just me not fully understanding his point, but isn't this also the case
>>> with the Cirrus Logic GD5446 card?
>>>
>>> He also suggested paravirtualization for 3D acceleration. Do you think
>>> it would make a good summer project?
>> I can't comment on these issues, CC'ing Paul, Anthony and Stefan.
> My understanding is that Cirrus logic cards also have 2D acceleration.  We
> implement this in qemu, but not in a way that's likely to be fast.  I don't
> really know either card in detail, but they're both a similar age, so I'd
> expect the functionality to be fairly similar.
>
> The 2D engines you're talking about are of questionable benefit.  IIUC they're
> basically a memcpy engine with some weird bitmasking operations that line up
> with the Windows 3 GDI raster ops.  While accelerating this may have made sense
> on a 386, it's not worth the effort on modern CPUs.  The latency and overhead
> of setting up and synchronising with the async blit engine is greater than the
> cost of just doing it in software.  In practice modern desktop environments
> just use the 3D engine.

2D acceleration is more useful for remote graphics protocols than for local 
performance.  We make use of Cirrus's bitblt, and it's a huge performance 
optimization for VNC.

The other big non-3D optimizations are YUV surfaces, hardware scaling, 
and RGBA hardware mouse rendering.  With those things, you can get 90% 
of the way to having a nice desktop experience.

And this is basically what VMware VGA has FWIW.  To get the rest of the 
way, you really need something like QXL that has offscreen surfaces, 
text rendering, etc.

Regards,

Anthony Liguori


> IMO emulating useful 'real' 3D hardware is not feasible.  In theory you could
> emulate an old card, however these are also of limited practical benefit.  For
> the S3 cards the 3D engine is so crippled that even when new it wasn't worth
> using.  You could maybe implement an old fixed-function card like, e.g. an
> i810 or 3dfx card, however drivers for these are also getting hard to come by,
> and the functionality is still limited.  You basically get raster offloading,
> and everything else is done in software.  Emulation overhead may be greater
> than useful offloaded work.
>
> For good 3D support you're looking at something shader based.  Emulating real
> hardware is not going to happen.  With real hardware the interface qemu needs
> to emulate is directly tied to the implementation details of that particular
> chipset.  The guest driver generally uses intimate knowledge of these
> implementation details (e.g. vram layout, shader ISA).  Different
> implementations may provide the same high-level functionality, however the
> low-level implementations are very different.  Reconstructing high-level
> operations from the low-level stream is extremely hard, probably harder than
> the main CPU emulation that qemu does.
>
> IMO a good rule of thumb is that the output of the render pipeline should not
> be guest visible.  Anything where the guest can observe/manipulate the output
> or intermediate results makes it very hard to isolate the guest from the
> implementation details (i.e. whatever hardware acceleration the host
> provides).
>
> There are already a handful of different paravirtual graphics drivers, of
> varying quality and openness.  This includes:
>
> - Several OpenGL passthrough drivers.  These are effectively just re-
> implementing GLX, often badly.  I suspect that given a decent virtual network,
> remote X (including 3D via GLX) already works pretty well.
>
> - SPICE. IIUC this is an ugly hack that maps directly onto legacy Windows/GDI
> operations.  I'm not aware of any substantive plan for making this work well
> in other environments (using the subset that's basically a dumb framebuffer
> doesn't count), or for doing 3D.
>
> - Whatever VMware uses.
>
> - Whatever VirtualBox uses.
>
> - At least two gallium3D based projects.  I think this includes Xen, and
> possibly VirtualBox.  Given the whole point of Gallium3D is to provide a
> common abstraction layer between the application API and the hardware this
> would be my choice.
>
> Paul
>


end of thread, other threads:[~2011-04-07 13:13 UTC | newest]

Thread overview: 8+ messages
2011-04-05 13:36 [Qemu-devel] GSoC 2011: S3 Trio, AHCI Roland Elek
2011-04-06 18:21 ` Luiz Capitulino
2011-04-06 22:54   ` Paul Brook
2011-04-07 10:10     ` Alon Levy
2011-04-07 10:17     ` Alon Levy
2011-04-07 13:13     ` Anthony Liguori
2011-04-07 10:57 ` Natalia Portillo
2011-04-07 12:13   ` Alexander Graf
