* [Qemu-devel] top(1) utility implementation in QEMU
@ 2016-09-26 13:44 prashanth sunder
2016-09-26 16:28 ` Daniel P. Berrange
0 siblings, 1 reply; 8+ messages in thread
From: prashanth sunder @ 2016-09-26 13:44 UTC (permalink / raw)
To: qemu-devel
Hi All,
Here is a summary of the discussion we had on IRC about the
different approaches for a top(1) tool in QEMU.
A prerequisite common to all approaches: implement unique naming for
all event loop resources. Sometimes a string literal can be used, but
other times the unique name needs to be generated at runtime (e.g. the
filename for an fd).
Approach 1)
For a built-in QMP implementation:
We have callbacks for fds, BHs and timers.
So every time one of them is registered, we add it to the list (what
we see through QMP), and when it is unregistered, we remove it from
the list.
Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
will remove the fd from the list.
QMP API:
set-event-loop-profiling enable=on/off
[interval=seconds] [iothread=name] and it emits a QMP event with
[{name, counter, time_elapsed}]
Pros:
It works on all systems.
Cons:
Information present inside glib is exposed only via SystemTap tracing
and will not be available via QMP - for example, I/O in chardevs,
network I/O, etc.
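To make the idea concrete, here is a minimal sketch of the resource
list and the proposed [{name, counter, time_elapsed}] event payload.
All names here (EventLoopProfiler, the resource name strings) are
made up for illustration; this is not the actual QEMU API.

```python
class EventLoopProfiler:
    """Hypothetical registry tracking named event-loop resources."""

    def __init__(self):
        # name -> {"counter": fires, "time_elapsed": ms spent in callback}
        self.resources = {}

    def register(self, name):
        # Called when an fd/BH/timer handler is registered.
        self.resources[name] = {"counter": 0, "time_elapsed": 0}

    def unregister(self, name):
        # Mirrors e.g. aio_set_fd_handler(fd, NULL, NULL, NULL).
        self.resources.pop(name, None)

    def fired(self, name, elapsed_ms):
        # Called from the callback dispatch path after the handler ran.
        r = self.resources[name]
        r["counter"] += 1
        r["time_elapsed"] += elapsed_ms

    def qmp_event(self):
        # Shape of the data the proposed QMP event would carry.
        return [{"name": n, **stats} for n, stats in self.resources.items()]

p = EventLoopProfiler()
p.register("fd:/dev/net/tun")
p.register("timer:rtc")
p.fired("fd:/dev/net/tun", 2)
p.fired("fd:/dev/net/tun", 1)
p.unregister("timer:rtc")
print(p.qmp_event())
```

The unregister call dropping the entry from the list corresponds to
the aio_set_fd_handler(fd, NULL, NULL, NULL) case above.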
Approach 2)
Using Trace:
Add trace event for each type of event loop resource (timer, fd, bh,
etc) in order to see when a resource fires.
Write top(1)-like SystemTap script to get data from the trace backend.
Pros:
No performance overhead using trace
Cons:
The data available from trace depends on the trace backend that QEMU
is configured with.
It is dependent on the availability of SystemTap and is backend specific.
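Whatever the backend, the aggregation the top(1)-like script would do
is simple: count fires and sum time per resource, then sort. The
trace line format below is invented for illustration; a real script
would consume whatever the configured backend emits.

```python
from collections import Counter

# Hypothetical trace records: "<resource-name> <ns-spent-in-callback>".
trace_lines = [
    "aio:fd 12000",
    "timer:rtc 3000",
    "aio:fd 8000",
    "bh:virtio-blk 5000",
    "aio:fd 4000",
]

counts = Counter()
ns_total = Counter()
for line in trace_lines:
    name, ns = line.rsplit(" ", 1)
    counts[name] += 1
    ns_total[name] += int(ns)

# Sort by cumulative time, the way top(1) sorts by CPU usage.
for name, total in ns_total.most_common():
    print(f"{name:20s} fires={counts[name]:3d} total={total}ns")
```

A SystemTap script would do the same with aggregate arrays keyed by
the resource name, printing on a timer probe.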
Approach 3)
Use Trace and extract trace backend data through QMP
Pros:
No performance overhead using trace
Cons:
User has to configure QMP to point to the trace backend.
Please let me know which implementation I should follow and work on.
If I've missed anything important, please add those points to this mail.
Regards,
Prashanth Sunder
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-09-26 13:44 [Qemu-devel] top(1) utility implementation in QEMU prashanth sunder
@ 2016-09-26 16:28 ` Daniel P. Berrange
2016-09-29 2:45 ` Fam Zheng
0 siblings, 1 reply; 8+ messages in thread
From: Daniel P. Berrange @ 2016-09-26 16:28 UTC (permalink / raw)
To: prashanth sunder; +Cc: qemu-devel
On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
> Hi All,
>
> Summary of the discussion and different approaches we had on IRC
> regarding a top(1) tool in qemu
>
> Implement unique naming for all event loop resources. Sometimes a
> string literal can be used but other times the unique name needs to be
> generated at runtime (e.g. filename for an fd).
>
> Approach 1)
> For a built-in QMP implementation:
> We have callbacks from fds, BHs and Timers
> So everytime one of them is registered - we add them to the list(what
> we see through QMP)
> and when they are unregistered - we remove them from the list.
> Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
> will remove the fd from the list.
>
> QMP API:
> set-event-loop-profiling enable=on/off
> [interval=seconds] [iothread=name] and it emits a QMP event with
> [{name, counter, time_elapsed}]
>
> Pros:
> It works on all systems.
> Cons:
> Information present inside glib is exposed only via systemtap tracing
> - these will not be available via QMP.
> For example - I/O in chardevs, network IO etc
There are other downsides to the QMP approach:
- Emitting data via QMP will change the behaviour of the system
itself, since QMP will trigger usage of the main event loop
which is the thing being traced. The degree of disturbance
will depend on the interval for emitting events
- If the interval is small and you're monitoring more than one
guest at a time, then the overhead of QMP could start to get
quite significant across the host as a whole. This was
mentioned at the summit wrt existing I/O stats exposed by
QEMU for block / net device backends.
- The 'top' tool does not actually have direct access to
QMP for any libvirt guests, and we're unlikely to want to
expose such QMP events via libvirt in any kind of supported
API, as they're very use-case specific in design. So at best
the app would have to use libvirt QMP passthrough which is
acceptable for developer / test environments, but not
something that's satisfactory for production deployments.
> Approach 2)
> Using Trace:
> Add trace event for each type of event loop resource (timer, fd, bh,
> etc) in order to see when a resource fires.
> Write top(1)-like SystemTap script to get data from the trace backend.
>
> Pros:
> No performance overhead using trace
Nothing is zero overhead, but more specifically it would avoid
the problem of the "top" tool data transport interfering with
the very data it is trying to measure from the event loop.
It also makes it easier to pull in data from other sources. For example,
you don't need to extend QMP for each new bit of internal state/data
that the top tool wants access to. You can get access to data that
QEMU doesn't have, such as in glib, or even in the kernel.
>
> Cons:
> The data available from trace depends on the trace-backend that qemu
> is configured with.
> It is dependent on availability of SystemTap and is backend specific
>
> Approach 3)
> Use Trace and extract trace backend data through QMP
>
> Pros:
> No performance overhead using trace
Not sure why you're claiming that - anything that feeds trace
data over QMP is going to have a potentially significant effect
as it'll send traffic through the event loop which is what is
being analysed.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-09-26 16:28 ` Daniel P. Berrange
@ 2016-09-29 2:45 ` Fam Zheng
2016-09-30 17:08 ` Markus Armbruster
0 siblings, 1 reply; 8+ messages in thread
From: Fam Zheng @ 2016-09-29 2:45 UTC (permalink / raw)
To: Daniel P. Berrange; +Cc: prashanth sunder, qemu-devel
On Mon, 09/26 17:28, Daniel P. Berrange wrote:
> On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
> > Hi All,
> >
> > Summary of the discussion and different approaches we had on IRC
> > regarding a top(1) tool in qemu
> >
> > Implement unique naming for all event loop resources. Sometimes a
> > string literal can be used but other times the unique name needs to be
> > generated at runtime (e.g. filename for an fd).
> >
> > Approach 1)
> > For a built-in QMP implementation:
> > We have callbacks from fds, BHs and Timers
> > So everytime one of them is registered - we add them to the list(what
> > we see through QMP)
> > and when they are unregistered - we remove them from the list.
> > Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
> > will remove the fd from the list.
> >
> > QMP API:
> > set-event-loop-profiling enable=on/off
> > [interval=seconds] [iothread=name] and it emits a QMP event with
> > [{name, counter, time_elapsed}]
> >
> > Pros:
> > It works on all systems.
> > Cons:
> > Information present inside glib is exposed only via systemtap tracing
> > - these will not be available via QMP.
> > For example - I/O in chardevs, network IO etc
>
>
> There's other downsides to QMP approach
>
> - Emitting data via QMP will change the behaviour of the system
> itself, since QMP will trigger usage of the main event loop
> which is the thing being traced. The degree of disturbance
> will depend on the interval for emitting events
Yes, but compared to a guest that is busy enough to be analyzed with qemu-top,
I don't think this can be a high degree, even if it's at a few events per second.
>
> - If the interval is small and you're monitoring more than one
> guest at a time, then the overhead of QMP could start to get
> quite significant across the host as a whole. This was
> mentioned at the summit wrt existing I/O stats expose by
> QEMU for block / net device backends.
qemu-top is supposed to run only in the foreground while a human attends,
so I'm not concerned about the system-wide overhead.
>
> - The 'top' tool does not actually have direct access to
> QMP for any libvirt guests and we've unlikely to want to
> expose such QMP events via libvirt in any kind of supported
> API, as they're very use-case specific in design. So at best
> the app would have to use libvirt QMP passthrough which is
> acceptable for developer / test environments, but not
> something that's satisfactory for production deployments.
Just another idea: my original thought on how to send statistics to
'qemu-top' was a specialized channel, like a socket with a minimized
protocol (e.g. a mini-QMP with only whitelisted commands, an event-only
QMP, or simply an ad-hoc format).
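As a sketch of the event-only variant: one-way, newline-delimited JSON
pushed at each interval, with no commands and no negotiation. The
event name and payload format below are invented for illustration; a
socketpair stands in for the QEMU <-> qemu-top socket.

```python
import json
import socket

def emit_stats(sock, stats):
    # One push per interval: a single event line, nothing to parse back.
    msg = json.dumps({"event": "EVENT_LOOP_STATS", "data": stats})
    sock.sendall(msg.encode() + b"\n")

# socketpair stands in for the dedicated QEMU <-> qemu-top channel.
qemu_side, top_side = socket.socketpair()
emit_stats(qemu_side, [{"name": "aio:fd", "counter": 3, "time_elapsed": 24}])

# qemu-top's side: read one line, decode, render.
received = json.loads(top_side.makefile().readline())
print(received["event"], received["data"])
```

Because the channel is one-way and event-only, there is nothing for a
management app to "taint" by issuing commands behind libvirt's back.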
>
> > Approach 2)
> > Using Trace:
> > Add trace event for each type of event loop resource (timer, fd, bh,
> > etc) in order to see when a resource fires.
> > Write top(1)-like SystemTap script to get data from the trace backend.
> >
> > Pros:
> > No performance overhead using trace
>
> Nothing is zero overhead, but more specifically it would avoid
> the problem of the "top" tool data transport interfering with
> the very data it is trying to measure from the event loop.
>
> It also makes it easier to pull in data other sources. For example
> you don't need to extend QMP for each new bit of internal state/data
> that the top tool wants access to. You can get access to data that
> QEMU doesn't have, such as in glib, or even in the kernel.
I'm optimistic about doing this with SystemTap only: once the trace
events are there, the script shouldn't be complicated at all, and it will
be useful anyway because of the glib advantage. Probably something worth
doing regardless?
>
> >
> > Cons:
> > The data available from trace depends on the trace-backend that qemu
> > is configured with.
> > It is dependent on availability of SystemTap and is backend specific
> >
> > Approach 3)
> > Use Trace and extract trace backend data through QMP
Like Daniel, I don't think this makes much sense.
> >
> > Pros:
> > No performance overhead using trace
>
> Not sure why you're claiming that - anything that feeds trace
> data over QMP is going to have a potentially significant effect
> as it'll send traffic through the event loop which is what is
> being analysed.
>
> Regards,
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
>
Fam
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-09-29 2:45 ` Fam Zheng
@ 2016-09-30 17:08 ` Markus Armbruster
2016-10-01 12:12 ` Fam Zheng
0 siblings, 1 reply; 8+ messages in thread
From: Markus Armbruster @ 2016-09-30 17:08 UTC (permalink / raw)
To: Fam Zheng; +Cc: Daniel P. Berrange, prashanth sunder, qemu-devel
Fam Zheng <famz@redhat.com> writes:
> On Mon, 09/26 17:28, Daniel P. Berrange wrote:
>> On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
>> > Hi All,
>> >
>> > Summary of the discussion and different approaches we had on IRC
>> > regarding a top(1) tool in qemu
>> >
>> > Implement unique naming for all event loop resources. Sometimes a
>> > string literal can be used but other times the unique name needs to be
>> > generated at runtime (e.g. filename for an fd).
>> >
>> > Approach 1)
>> > For a built-in QMP implementation:
>> > We have callbacks from fds, BHs and Timers
>> > So everytime one of them is registered - we add them to the list(what
>> > we see through QMP)
>> > and when they are unregistered - we remove them from the list.
>> > Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
>> > will remove the fd from the list.
>> >
>> > QMP API:
>> > set-event-loop-profiling enable=on/off
>> > [interval=seconds] [iothread=name] and it emits a QMP event with
>> > [{name, counter, time_elapsed}]
>> >
>> > Pros:
>> > It works on all systems.
>> > Cons:
>> > Information present inside glib is exposed only via systemtap tracing
>> > - these will not be available via QMP.
>> > For example - I/O in chardevs, network IO etc
>>
>>
>> There's other downsides to QMP approach
>>
>> - Emitting data via QMP will change the behaviour of the system
>> itself, since QMP will trigger usage of the main event loop
>> which is the thing being traced. The degree of disturbance
>> will depend on the interval for emitting events
>
> Yes, but compared to a guest that is busy enough to be analyzed with qemu-top,
> I don't think this can be a high degree, even it's at a few events per second.
>
>>
>> - If the interval is small and you're monitoring more than one
>> guest at a time, then the overhead of QMP could start to get
>> quite significant across the host as a whole. This was
>> mentioned at the summit wrt existing I/O stats expose by
>> QEMU for block / net device backends.
>
> qemu-top is supposed to run only in foreground when human attends. So I'm not
> concerned about the system wide overall overhead.
>
>>
>> - The 'top' tool does not actually have direct access to
>> QMP for any libvirt guests and we've unlikely to want to
>> expose such QMP events via libvirt in any kind of supported
>> API, as they're very use-case specific in design. So at best
>> the app would have to use libvirt QMP passthrough which is
>> acceptable for developer / test environments, but not
>> something that's satisfactory for production deployments.
>
> Just another idea: my original though on how to send statistics to 'qemu-top',
> was a specialized channel like a socket with a minimized protocol (e.g. a
> mini-QMP, with only whitelisted commands, or an event-only QMP, or simply in an
> ad-hoc format).
What's the advantage over simply using another QMP monitor? Naturally,
injecting arbitrary QMP commands behind libvirt's back isn't going to
end well, but "don't do that then". Information queries and listening
to events should be safe.
Note that we could have a QMP command to spawn monitors. Fun!
[...]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-09-30 17:08 ` Markus Armbruster
@ 2016-10-01 12:12 ` Fam Zheng
2016-10-04 7:42 ` Markus Armbruster
0 siblings, 1 reply; 8+ messages in thread
From: Fam Zheng @ 2016-10-01 12:12 UTC (permalink / raw)
To: Markus Armbruster; +Cc: prashanth sunder, qemu-devel
On Fri, 09/30 19:08, Markus Armbruster wrote:
> Fam Zheng <famz@redhat.com> writes:
>
> > On Mon, 09/26 17:28, Daniel P. Berrange wrote:
> >> On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
> >> > Hi All,
> >> >
> >> > Summary of the discussion and different approaches we had on IRC
> >> > regarding a top(1) tool in qemu
> >> >
> >> > Implement unique naming for all event loop resources. Sometimes a
> >> > string literal can be used but other times the unique name needs to be
> >> > generated at runtime (e.g. filename for an fd).
> >> >
> >> > Approach 1)
> >> > For a built-in QMP implementation:
> >> > We have callbacks from fds, BHs and Timers
> >> > So everytime one of them is registered - we add them to the list(what
> >> > we see through QMP)
> >> > and when they are unregistered - we remove them from the list.
> >> > Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
> >> > will remove the fd from the list.
> >> >
> >> > QMP API:
> >> > set-event-loop-profiling enable=on/off
> >> > [interval=seconds] [iothread=name] and it emits a QMP event with
> >> > [{name, counter, time_elapsed}]
> >> >
> >> > Pros:
> >> > It works on all systems.
> >> > Cons:
> >> > Information present inside glib is exposed only via systemtap tracing
> >> > - these will not be available via QMP.
> >> > For example - I/O in chardevs, network IO etc
> >>
> >>
> >> There's other downsides to QMP approach
> >>
> >> - Emitting data via QMP will change the behaviour of the system
> >> itself, since QMP will trigger usage of the main event loop
> >> which is the thing being traced. The degree of disturbance
> >> will depend on the interval for emitting events
> >
> > Yes, but compared to a guest that is busy enough to be analyzed with qemu-top,
> > I don't think this can be a high degree, even it's at a few events per second.
> >
> >>
> >> - If the interval is small and you're monitoring more than one
> >> guest at a time, then the overhead of QMP could start to get
> >> quite significant across the host as a whole. This was
> >> mentioned at the summit wrt existing I/O stats expose by
> >> QEMU for block / net device backends.
> >
> > qemu-top is supposed to run only in foreground when human attends. So I'm not
> > concerned about the system wide overall overhead.
> >
> >>
> >> - The 'top' tool does not actually have direct access to
> >> QMP for any libvirt guests and we've unlikely to want to
> >> expose such QMP events via libvirt in any kind of supported
> >> API, as they're very use-case specific in design. So at best
> >> the app would have to use libvirt QMP passthrough which is
> >> acceptable for developer / test environments, but not
> >> something that's satisfactory for production deployments.
> >
> > Just another idea: my original though on how to send statistics to 'qemu-top',
> > was a specialized channel like a socket with a minimized protocol (e.g. a
> > mini-QMP, with only whitelisted commands, or an event-only QMP, or simply in an
> > ad-hoc format).
>
> What's the advantage over simply using another QMP monitor? Naturally,
> injecting arbitrary QMP commands behind libvirt's back isn't going to
> end well, but "don't do that then". Information queries and listening
> to events should be safe.
In order to avoid a libvirt "tainted" state in a production environment -
assuming, of course, that qemu-top is useful there at all.
> Note that we could have a QMP command to spawn monitors. Fun!
Cool, and how hard is it to implement a QMP command to kill monitors? :)
Fam
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-10-01 12:12 ` Fam Zheng
@ 2016-10-04 7:42 ` Markus Armbruster
2016-10-11 12:51 ` Fam Zheng
0 siblings, 1 reply; 8+ messages in thread
From: Markus Armbruster @ 2016-10-04 7:42 UTC (permalink / raw)
To: Fam Zheng; +Cc: prashanth sunder, qemu-devel
Fam Zheng <famz@redhat.com> writes:
> On Fri, 09/30 19:08, Markus Armbruster wrote:
>> Fam Zheng <famz@redhat.com> writes:
>>
>> > On Mon, 09/26 17:28, Daniel P. Berrange wrote:
>> >> On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
>> >> > Hi All,
>> >> >
>> >> > Summary of the discussion and different approaches we had on IRC
>> >> > regarding a top(1) tool in qemu
>> >> >
>> >> > Implement unique naming for all event loop resources. Sometimes a
>> >> > string literal can be used but other times the unique name needs to be
>> >> > generated at runtime (e.g. filename for an fd).
>> >> >
>> >> > Approach 1)
>> >> > For a built-in QMP implementation:
>> >> > We have callbacks from fds, BHs and Timers
>> >> > So everytime one of them is registered - we add them to the list(what
>> >> > we see through QMP)
>> >> > and when they are unregistered - we remove them from the list.
>> >> > Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
>> >> > will remove the fd from the list.
>> >> >
>> >> > QMP API:
>> >> > set-event-loop-profiling enable=on/off
>> >> > [interval=seconds] [iothread=name] and it emits a QMP event with
>> >> > [{name, counter, time_elapsed}]
>> >> >
>> >> > Pros:
>> >> > It works on all systems.
>> >> > Cons:
>> >> > Information present inside glib is exposed only via systemtap tracing
>> >> > - these will not be available via QMP.
>> >> > For example - I/O in chardevs, network IO etc
>> >>
>> >>
>> >> There's other downsides to QMP approach
>> >>
>> >> - Emitting data via QMP will change the behaviour of the system
>> >> itself, since QMP will trigger usage of the main event loop
>> >> which is the thing being traced. The degree of disturbance
>> >> will depend on the interval for emitting events
>> >
>> > Yes, but compared to a guest that is busy enough to be analyzed with qemu-top,
>> > I don't think this can be a high degree, even it's at a few events per second.
>> >
>> >>
>> >> - If the interval is small and you're monitoring more than one
>> >> guest at a time, then the overhead of QMP could start to get
>> >> quite significant across the host as a whole. This was
>> >> mentioned at the summit wrt existing I/O stats expose by
>> >> QEMU for block / net device backends.
>> >
>> > qemu-top is supposed to run only in foreground when human attends. So I'm not
>> > concerned about the system wide overall overhead.
>> >
>> >>
>> >> - The 'top' tool does not actually have direct access to
>> >> QMP for any libvirt guests and we've unlikely to want to
>> >> expose such QMP events via libvirt in any kind of supported
>> >> API, as they're very use-case specific in design. So at best
>> >> the app would have to use libvirt QMP passthrough which is
>> >> acceptable for developer / test environments, but not
>> >> something that's satisfactory for production deployments.
>> >
>> > Just another idea: my original though on how to send statistics to 'qemu-top',
>> > was a specialized channel like a socket with a minimized protocol (e.g. a
>> > mini-QMP, with only whitelisted commands, or an event-only QMP, or simply in an
>> > ad-hoc format).
>>
>> What's the advantage over simply using another QMP monitor? Naturally,
>> injecting arbitrary QMP commands behind libvirt's back isn't going to
>> end well, but "don't do that then". Information queries and listening
>> to events should be safe.
>
> In order to avoid a Libvirt "tainted" state at production env, of course
> assuming qemu-top is useful there at all.
Adding another QMP-like protocol seems like a rather steep price just
for avoiding "tainted".
Any chance we can provide this feature together with libvirt instead of
behind its back?
>> Note that we could have a QMP command to spawn monitors. Fun!
>
> Cool, and how hard is it to implement a QMP command to kill monitors? :)
For spawning, we need to adapt the current spawn code to work after
initial startup, too. For killing, we need to write new code. Might be
harder, but can't say until we try.
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-10-04 7:42 ` Markus Armbruster
@ 2016-10-11 12:51 ` Fam Zheng
2016-10-11 13:09 ` Daniel P. Berrange
0 siblings, 1 reply; 8+ messages in thread
From: Fam Zheng @ 2016-10-11 12:51 UTC (permalink / raw)
To: Markus Armbruster; +Cc: prashanth sunder, qemu-devel
On Tue, 10/04 09:42, Markus Armbruster wrote:
> >> What's the advantage over simply using another QMP monitor? Naturally,
> >> injecting arbitrary QMP commands behind libvirt's back isn't going to
> >> end well, but "don't do that then". Information queries and listening
> >> to events should be safe.
> >
> > In order to avoid a Libvirt "tainted" state at production env, of course
> > assuming qemu-top is useful there at all.
>
> Adding another QMP-like protocol seems like a rather steep price just
> for avoiding "tainted".
>
> Any chance we can provide this feature together with libvirt instead of
> behind its back?
That would be the best, but I am not sure how to make an appropriate interface.
>
> >> Note that we could have a QMP command to spawn monitors. Fun!
> >
> > Cool, and how hard is it to implement a QMP command to kill monitors? :)
>
> For spawning, we need to adapt the current spawn code to work after
> initial startup, too. For killing, we need to write new code. Might be
> harder, but can't say until we try.
>
Fam
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] top(1) utility implementation in QEMU
2016-10-11 12:51 ` Fam Zheng
@ 2016-10-11 13:09 ` Daniel P. Berrange
0 siblings, 0 replies; 8+ messages in thread
From: Daniel P. Berrange @ 2016-10-11 13:09 UTC (permalink / raw)
To: Fam Zheng; +Cc: Markus Armbruster, prashanth sunder, qemu-devel
On Tue, Oct 11, 2016 at 08:51:01PM +0800, Fam Zheng wrote:
> On Tue, 10/04 09:42, Markus Armbruster wrote:
> > >> What's the advantage over simply using another QMP monitor? Naturally,
> > >> injecting arbitrary QMP commands behind libvirt's back isn't going to
> > >> end well, but "don't do that then". Information queries and listening
> > >> to events should be safe.
> > >
> > > In order to avoid a Libvirt "tainted" state at production env, of course
> > > assuming qemu-top is useful there at all.
> >
> > Adding another QMP-like protocol seems like a rather steep price just
> > for avoiding "tainted".
> >
> > Any chance we can provide this feature together with libvirt instead of
> > behind its back?
>
> That would be the best, but I am not sure how to make an appropriate interface.
IMHO the QMP monitor just isn't a good fit for performance monitoring due to
its inherent inefficiency wrt serialization/deserialization. This is already
a problem with the limited usage libvirt makes of QMP when we're collecting
data from a number of guests. I also think it is a bad choice for exposing
ad-hoc debugging facilities too - dynamic instrumentation is far more
flexible as it avoids us having to maintain long-term stable QMP schemas
for instrumenting internal, ever-changing data structures.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2016-10-11 13:09 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-26 13:44 [Qemu-devel] top(1) utility implementation in QEMU prashanth sunder
2016-09-26 16:28 ` Daniel P. Berrange
2016-09-29 2:45 ` Fam Zheng
2016-09-30 17:08 ` Markus Armbruster
2016-10-01 12:12 ` Fam Zheng
2016-10-04 7:42 ` Markus Armbruster
2016-10-11 12:51 ` Fam Zheng
2016-10-11 13:09 ` Daniel P. Berrange