All of lore.kernel.org
* Introspection API development
@ 2016-08-04  3:25 Stephen Pape
  2016-08-04  8:50 ` Paolo Bonzini
  2016-08-04 12:48 ` Stefan Hajnoczi
  0 siblings, 2 replies; 10+ messages in thread
From: Stephen Pape @ 2016-08-04  3:25 UTC (permalink / raw)
  To: kvm

Hello all,

For my own purposes, I've been modifying KVM to support introspection.
I have a project that uses Xen's "vm event" API, and I'm trying to get
things working with KVM as well. Our implementation needs to be able
to map guest memory directly. I have seen the qemu patch used by
libvmi, but they handle memory reads and writes through a socket,
which just won't work for us. I'm also looking to add other types of
hooks, and patching qemu seems too limited.

My approach involves modifying the kernel driver to export a
/dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
as well.

My (relatively minor) patch allows processes besides the launching
process to do things like map guest memory and read VCPU states for a
VM. Soon, I'll be looking into adding support for handling events (cr3
writes, int3 traps, etc.). Eventually, an event should come in, a
program will handle it (while able to read memory/registers), and then
resume the VCPU.

My question is, is this anything the KVM group would be interested in
bringing upstream? I'd definitely be willing to change my approach if
necessary. If there's no interest, I'll just have to maintain my own
patches.

Any comments would be welcome.

Thanks!

-Stephen

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Introspection API development
  2016-08-04  3:25 Introspection API development Stephen Pape
@ 2016-08-04  8:50 ` Paolo Bonzini
  2016-08-04  8:55   ` Jan Kiszka
  2016-08-04 11:18   ` Mihai Donțu
  2016-08-04 12:48 ` Stefan Hajnoczi
  1 sibling, 2 replies; 10+ messages in thread
From: Paolo Bonzini @ 2016-08-04  8:50 UTC (permalink / raw)
  To: Stephen Pape, kvm, Jan Kiszka



On 04/08/2016 05:25, Stephen Pape wrote:
> 
> My approach involves modifying the kernel driver to export a
> /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
> as well.
> 
> My (relatively minor) patch allows processes besides the launching
> process to do things like map guest memory and read VCPU states for a
> VM. Soon, I'll be looking into adding support for handling events (cr3
> writes, int3 traps, etc.). Eventually, an event should come in, a
> program will handle it (while able to read memory/registers), and then
> resume the VCPU.

I think the interface should be implemented entirely in userspace and it
should be *mostly* socket-based; I say mostly because I understand that
reading memory directly can be useful.

So this is a lot like a mix of two interfaces:

- a debugger interface which lets you read/write registers and set events

- the vhost-user interface which lets you pass the memory map (a mapping
between guest physical addresses and offsets in a file descriptor) from
QEMU to another process.

The gdb stub protocol seems limited for the kind of event you want to
trap, but there was already a GSoC project a few years ago that looked
at gdb protocol extensions.  Jan, what was the outcome?

In any case, I think there should be a separation between the ioctl KVM
API and the socket userspace API.  By the way most of the KVM API is
already there---e.g. reading/writing registers, breakpoints,
etc.---though you'll want to add events such as cr3 or idtr writes.

Thanks,

Paolo

> My question is, is this anything the KVM group would be interested in
> bringing upstream? I'd definitely be willing to change my approach if
> necessary. If there's no interest, I'll just have to maintain my own
> patches.


* Re: Introspection API development
  2016-08-04  8:50 ` Paolo Bonzini
@ 2016-08-04  8:55   ` Jan Kiszka
  2016-08-04 11:18   ` Mihai Donțu
  1 sibling, 0 replies; 10+ messages in thread
From: Jan Kiszka @ 2016-08-04  8:55 UTC (permalink / raw)
  To: Paolo Bonzini, Stephen Pape, kvm

On 2016-08-04 10:50, Paolo Bonzini wrote:
> 
> 
> On 04/08/2016 05:25, Stephen Pape wrote:
>>
>> My approach involves modifying the kernel driver to export a
>> /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
>> as well.
>>
>> My (relatively minor) patch allows processes besides the launching
>> process to do things like map guest memory and read VCPU states for a
>> VM. Soon, I'll be looking into adding support for handling events (cr3
>> writes, int3 traps, etc.). Eventually, an event should come in, a
>> program will handle it (while able to read memory/registers), and then
>> resume the VCPU.
> 
> I think the interface should be implemented entirely in userspace and it
> should be *mostly* socket-based; I say mostly because I understand that
> reading memory directly can be useful.
> 
> So this is a lot like a mix of two interfaces:
> 
> - a debugger interface which lets you read/write registers and set events
> 
> - the vhost-user interface which lets you pass the memory map (a mapping
> between guest physical addresses and offsets in a file descriptor) from
> QEMU to another process.
> 
> The gdb stub protocol seems limited for the kind of event you want to
> trap, but there was already a GSoC project a few years ago that looked
> at gdb protocol extensions.  Jan, what was the outcome?

IIRC, the project was not started in the end.

Jan

> 
> In any case, I think there should be a separation between the ioctl KVM
> API and the socket userspace API.  By the way most of the KVM API is
> already there---e.g. reading/writing registers, breakpoints,
> etc.---though you'll want to add events such as cr3 or idtr writes.
> 
> Thanks,
> 
> Paolo
> 
>> My question is, is this anything the KVM group would be interested in
>> bringing upstream? I'd definitely be willing to change my approach if
>> necessary. If there's no interest, I'll just have to maintain my own
>> patches.



* Re: Introspection API development
  2016-08-04  8:50 ` Paolo Bonzini
  2016-08-04  8:55   ` Jan Kiszka
@ 2016-08-04 11:18   ` Mihai Donțu
  2016-08-04 12:44     ` Jan Kiszka
  2016-08-04 12:56     ` Paolo Bonzini
  1 sibling, 2 replies; 10+ messages in thread
From: Mihai Donțu @ 2016-08-04 11:18 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Stephen Pape, kvm, Jan Kiszka

On Thu, 4 Aug 2016 10:50:30 +0200 Paolo Bonzini wrote:
> On 04/08/2016 05:25, Stephen Pape wrote:
> > My approach involves modifying the kernel driver to export a
> > /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
> > as well.
> > 
> > My (relatively minor) patch allows processes besides the launching
> > process to do things like map guest memory and read VCPU states for a
> > VM. Soon, I'll be looking into adding support for handling events (cr3
> > writes, int3 traps, etc.). Eventually, an event should come in, a
> > program will handle it (while able to read memory/registers), and then
> > resume the VCPU.  
> 
> I think the interface should be implemented entirely in userspace and it
> should be *mostly* socket-based; I say mostly because I understand that
> reading memory directly can be useful.

We are working on something similar, but we're looking into
implementing it entirely in the kernel, possibly leveraging VirtIO,
due to performance considerations (mostly caused by the overhead of hw
virtualization).

The model we're aiming for is: on a KVM host, out of the N running
VMs, one has special privileges allowing it to manipulate the memory
and vCPU state of the others. We call that special VM an SVA (Security
Virtual Appliance) and it uses a channel (much like the one found on
Xen - evtchn) and a set of specific VMCALLs to:

  * receive notifications from the host when a new VM is
    created/destroyed
  * manipulate the EPT of a specific VM
  * manipulate the vCPU state of a specific VM (GPRs)
  * manipulate the memory of a specific VM (insert code)

We don't have much code in place at the moment, but we plan to post a
RFC series in the near future.

Obviously we've tried the userspace / qemu approach, since it would
have made development _much_ easier, but it's simply not performant
enough. This whole KVM work is actually glue to an introspection
technology we have developed, which uses extensive hooking (via EPT)
to monitor execution of the kernel and user-mode processes, all the
while aiming to shave at most 20% off the performance of each VM (in a
100-VM setup).

> So this is a lot like a mix of two interfaces:
> 
> - a debugger interface which lets you read/write registers and set events
> 
> - the vhost-user interface which lets you pass the memory map (a mapping
> between guest physical addresses and offsets in a file descriptor) from
> QEMU to another process.
> 
> The gdb stub protocol seems limited for the kind of event you want to
> trap, but there was already a GSoC project a few years ago that looked
> at gdb protocol extensions.  Jan, what was the outcome?
> 
> In any case, I think there should be a separation between the ioctl KVM
> API and the socket userspace API.  By the way most of the KVM API is
> already there---e.g. reading/writing registers, breakpoints,
> etc.---though you'll want to add events such as cr3 or idtr writes.
> 
> > My question is, is this anything the KVM group would be interested in
> > bringing upstream? I'd definitely be willing to change my approach if
> > necessary. If there's no interest, I'll just have to maintain my own
> > patches.  

-- 
Mihai DONȚU


* Re: Introspection API development
  2016-08-04 11:18   ` Mihai Donțu
@ 2016-08-04 12:44     ` Jan Kiszka
  2016-08-04 13:57       ` Mihai Donțu
  2016-08-04 12:56     ` Paolo Bonzini
  1 sibling, 1 reply; 10+ messages in thread
From: Jan Kiszka @ 2016-08-04 12:44 UTC (permalink / raw)
  To: Mihai Donțu, Paolo Bonzini; +Cc: Stephen Pape, kvm

On 2016-08-04 13:18, Mihai Donțu wrote:
> On Thu, 4 Aug 2016 10:50:30 +0200 Paolo Bonzini wrote:
>> On 04/08/2016 05:25, Stephen Pape wrote:
>>> My approach involves modifying the kernel driver to export a
>>> /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
>>> as well.
>>>
>>> My (relatively minor) patch allows processes besides the launching
>>> process to do things like map guest memory and read VCPU states for a
>>> VM. Soon, I'll be looking into adding support for handling events (cr3
>>> writes, int3 traps, etc.). Eventually, an event should come in, a
>>> program will handle it (while able to read memory/registers), and then
>>> resume the VCPU.  
>>
>> I think the interface should be implemented entirely in userspace and it
>> should be *mostly* socket-based; I say mostly because I understand that
>> reading memory directly can be useful.
> 
> We are working on something similar, but we're looking into making it
> entirely in kernel and possibly leveraging VirtIO, due to performance
> considerations (mostly caused by the overhead of hw virtualization).
> 
> The model we're aiming is: on a KVM host, out of the N running VM-s, one
> has special privileges allowing it to manipulate the memory and vCPU
> state of the others. We call that special VM an SVA (Security Virtual
> Appliance) and it uses a channel (much like the one found on Xen -
> evtchn) and a set of specific VMCALL-s to:
> 
>   * receive notifications from the host when a new VM is
>     created/destroyed
>   * manipulate the EPT of a specific VM
>   * manipulate the vCPU state of a specific VM (GPRs)
>   * manipulate the memory of a specific VM (insert code)
> 
> We don't have much code in place at the moment, but we plan to post a
> RFC series in the near future.
> 
> Obviously we've tried the userspace / qemu approach since it would have
> made development _much_ easier, but it's simply not "performant" enough.

What was the bottleneck? VCPU state monitoring/manipulation, VM memory
access, or GPA-to-HPA (i.e. EPT on Intel) manipulations? I suppose that
information will be essential when you want to convince the maintainers
to add another kernel interface (in times when such interfaces are
rather being reduced).

Jan

> This whole KVM work is actually a "glue" to an introspection technology
> we have developed and which uses extensive hooking (via EPT) to monitor
> execution of the kernel and user-mode processes, all the while aiming
> to shave at most 20% out of the performance of each VM (in a 100-VM
> setup).
> 
>> So this is a lot like a mix of two interfaces:
>>
>> - a debugger interface which lets you read/write registers and set events
>>
>> - the vhost-user interface which lets you pass the memory map (a mapping
>> between guest physical addresses and offsets in a file descriptor) from
>> QEMU to another process.
>>
>> The gdb stub protocol seems limited for the kind of event you want to
>> trap, but there was already a GSoC project a few years ago that looked
>> at gdb protocol extensions.  Jan, what was the outcome?
>>
>> In any case, I think there should be a separation between the ioctl KVM
>> API and the socket userspace API.  By the way most of the KVM API is
>> already there---e.g. reading/writing registers, breakpoints,
>> etc.---though you'll want to add events such as cr3 or idtr writes.
>>
>>> My question is, is this anything the KVM group would be interested in
>>> bringing upstream? I'd definitely be willing to change my approach if
>>> necessary. If there's no interest, I'll just have to maintain my own
>>> patches.  
> 



* Re: Introspection API development
  2016-08-04  3:25 Introspection API development Stephen Pape
  2016-08-04  8:50 ` Paolo Bonzini
@ 2016-08-04 12:48 ` Stefan Hajnoczi
  2016-08-04 15:08   ` Stephen Pape
  1 sibling, 1 reply; 10+ messages in thread
From: Stefan Hajnoczi @ 2016-08-04 12:48 UTC (permalink / raw)
  To: Stephen Pape; +Cc: kvm


On Wed, Aug 03, 2016 at 11:25:07PM -0400, Stephen Pape wrote:
> Hello all,
> 
> For my own purposes, I've been modifying KVM to support introspection.
> I have a project that uses Xen's "vm event" API, and I'm trying to get
> things working with KVM as well. Our implementation needs to be able
> to map guest memory directly. I have seen the qemu patch used by
> libvmi, but they handle memory reads and writes through a socket,
> which just won't work for us. I'm also looking to add other types of
> hooks, and patching qemu seems too limited.
> 
> My approach involves modifying the kernel driver to export a
> /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
> as well.
> 
> My (relatively minor) patch allows processes besides the launching
> process to do things like map guest memory and read VCPU states for a
> VM. Soon, I'll be looking into adding support for handling events (cr3
> writes, int3 traps, etc.). Eventually, an event should come in, a
> program will handle it (while able to read memory/registers), and then
> resume the VCPU.

Why not put your code into QEMU if your end goal is to handle the
vmexit?

Can you describe the use case and software you are developing?

Stefan



* Re: Introspection API development
  2016-08-04 11:18   ` Mihai Donțu
  2016-08-04 12:44     ` Jan Kiszka
@ 2016-08-04 12:56     ` Paolo Bonzini
  2016-08-04 13:41       ` Mihai Donțu
  1 sibling, 1 reply; 10+ messages in thread
From: Paolo Bonzini @ 2016-08-04 12:56 UTC (permalink / raw)
  To: Mihai Donțu; +Cc: Stephen Pape, kvm, Jan Kiszka



On 04/08/2016 13:18, Mihai Donțu wrote:
> The model we're aiming is: on a KVM host, out of the N running VM-s, one
> has special privileges allowing it to manipulate the memory and vCPU
> state of the others. We call that special VM an SVA (Security Virtual
> Appliance) and it uses a channel (much like the one found on Xen -
> evtchn) and a set of specific VMCALL-s to:
> 
>   * receive notifications from the host when a new VM is
>     created/destroyed
>   * manipulate the EPT of a specific VM
>   * manipulate the vCPU state of a specific VM (GPRs)
>   * manipulate the memory of a specific VM (insert code)

No special VMs and hypercalls, please.  Xen is a microkernel at its
core, KVM is not.  Just run a process on the host.

I'm not very convinced of manipulating the EPT page tables directly.
There must be some higher-level abstraction.  For example, KVM has
recently grown a new in-kernel interface to track dirty pages, and if
anything you should export that one as ioctls, and make QEMU use the ioctls.

> Obviously we've tried the userspace / qemu approach since it would have
> made development _much_ easier, but it's simply not "performant" enough.

That reminds me of kdbus.  Without having even stated what the
requirements are, "it's slow" is dogma rather than fact.  Even more so
if the client is proprietary and hidden behind a black-box "appliance".

vhost-user is performant enough for line-speed packet processing.  It's
obviously not the same thing as VM memory introspection, but it's a
logical suggestion.

Paolo


* Re: Introspection API development
  2016-08-04 12:56     ` Paolo Bonzini
@ 2016-08-04 13:41       ` Mihai Donțu
  0 siblings, 0 replies; 10+ messages in thread
From: Mihai Donțu @ 2016-08-04 13:41 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Stephen Pape, kvm, Jan Kiszka

On Thursday 04 August 2016 14:56:01 Paolo Bonzini wrote:
> On 04/08/2016 13:18, Mihai Donțu wrote:
> > The model we're aiming is: on a KVM host, out of the N running VM-s, one
> > has special privileges allowing it to manipulate the memory and vCPU
> > state of the others. We call that special VM an SVA (Security Virtual
> > Appliance) and it uses a channel (much like the one found on Xen -
> > evtchn) and a set of specific VMCALL-s to:
> > 
> >   * receive notifications from the host when a new VM is
> >     created/destroyed
> >   * manipulate the EPT of a specific VM
> >   * manipulate the vCPU state of a specific VM (GPRs)
> >   * manipulate the memory of a specific VM (insert code)  
> 
> No special VMs and hypercalls, please.  Xen is a microkernel at its
> core, KVM is not.  Just run a process on the host.

I personally have no problem with that approach, but I was thinking of
RHEV, which uses the ESXi model: the "host OS" is pretty much locked
down and only runs the bare minimum, relying on "service VMs" to
implement 3rd-party features (not that I know of any for RHEV, yet).

> I'm not very convinced of manipulating the EPT page tables directly.
> There must be some higher-level abstraction.  For example, KVM has
> recently grown a new in-kernel interface to track dirty pages, and if
> anything you should export that one as ioctls, and make QEMU use the ioctls.

I merely gave EPT as an example of the type of low level access we'd
need. Obviously something generic would be much better, as we're also
looking at ARM.

> > Obviously we've tried the userspace / qemu approach since it would have
> > made development _much_ easier, but it's simply not "performant" enough.  
> 
> That reminds me of kdbus.  Without having even stated what the
> requirements are, "it's slow" is dogma rather than fact.  Even more so
> if the client is proprietary and hidden behind a black-box "appliance".

I apologise, I might have given the impression that we jumped on the
"it's too slow" wagon too fast. We're actually looking at both
approaches; I'm merely the only one looking into VirtIO. However, I
don't have any numbers yet. :-(

> vhost-user is performant enough for line-speed packet processing.  It's
> obviously not the same thing as VM memory introspection, but it's a
> logical suggestion.

I'll have a look at that again, thanks for the hint!

Here is a small presentation of what we're trying to achieve for KVM:
https://events.linuxfoundation.org/sites/events/files/slides/Zero-Footprint%20Guest%20Memory%20Introspection%20from%20Xen%20_%20draft11.pdf

When I'm talking about speed, I might be a bit biased as I've been
involved in the performance improvements only on the Xen side and we
tried pretty hard to keep some code paths as simple as possible.

-- 
Mihai DONȚU


* Re: Introspection API development
  2016-08-04 12:44     ` Jan Kiszka
@ 2016-08-04 13:57       ` Mihai Donțu
  0 siblings, 0 replies; 10+ messages in thread
From: Mihai Donțu @ 2016-08-04 13:57 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Paolo Bonzini, Stephen Pape, kvm


On Thursday 04 August 2016 14:44:10 Jan Kiszka wrote:
> On 2016-08-04 13:18, Mihai Donțu wrote:
> > On Thu, 4 Aug 2016 10:50:30 +0200 Paolo Bonzini wrote:  
> > > On 04/08/2016 05:25, Stephen Pape wrote:  
> > > > My approach involves modifying the kernel driver to export a
> > > > /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
> > > > as well.
> > > >
> > > > My (relatively minor) patch allows processes besides the launching
> > > > process to do things like map guest memory and read VCPU states for a
> > > > VM. Soon, I'll be looking into adding support for handling events (cr3
> > > > writes, int3 traps, etc.). Eventually, an event should come in, a
> > > > program will handle it (while able to read memory/registers), and then
> > > > resume the VCPU.
> > >
> > > I think the interface should be implemented entirely in userspace and it
> > > should be *mostly* socket-based; I say mostly because I understand that
> > > reading memory directly can be useful.  
> > 
> > We are working on something similar, but we're looking into making it
> > entirely in kernel and possibly leveraging VirtIO, due to performance
> > considerations (mostly caused by the overhead of hw virtualization).
> > 
> > The model we're aiming is: on a KVM host, out of the N running VM-s, one
> > has special privileges allowing it to manipulate the memory and vCPU
> > state of the others. We call that special VM an SVA (Security Virtual
> > Appliance) and it uses a channel (much like the one found on Xen -
> > evtchn) and a set of specific VMCALL-s to:
> > 
> >   * receive notifications from the host when a new VM is
> >     created/destroyed
> >   * manipulate the EPT of a specific VM
> >   * manipulate the vCPU state of a specific VM (GPRs)
> >   * manipulate the memory of a specific VM (insert code)
> > 
> > We don't have much code in place at the moment, but we plan to post a
> > RFC series in the near future.
> > 
> > Obviously we've tried the userspace / qemu approach since it would have
> > made development _much_ easier, but it's simply not "performant" enough.  
> 
> What was the bottleneck? VCPU state monitoring/manipulation, VM memory
> access or GPA-to-HPA (ie. EPT on Intel) manipulations? I suppose that
> information will be essential when you want to convince the maintainers
> to add another kernel interface (in times where they are rather reduced).

OK, a bit of background on my observations: I initially started with a
POC in which an introspecting tool resides entirely in the host
kernel. That tool tracks the VM from creation to destruction and
begins by hooking some MSRs (to determine when the guest kernel has
reached a certain load stage) and then marks a range of pages as RO in
the EPT. Anyway, the tool worked OK and the VM behaved decently
(considering my hooks into the KVM MMU were not the best), but it gave
me a sense of the performance obtainable. Then I factored in the idea
that the tool should run in userspace (host or guest VM) to prevent it
from bringing down the entire host if it were to crash.

Right now, I'm not really trying to convince anyone of anything, as
I'm not truly convinced of my approach either. I need to make more
progress, but the feeling that I will need to bypass qemu is pretty
strong. :-)

> > This whole KVM work is actually a "glue" to an introspection technology
> > we have developed and which uses extensive hooking (via EPT) to monitor
> > execution of the kernel and user-mode processes, all the while aiming
> > to shave at most 20% out of the performance of each VM (in a 100-VM
> > setup).
> >   
> > > So this is a lot like a mix of two interfaces:
> > >
> > > - a debugger interface which lets you read/write registers and set events
> > >
> > > - the vhost-user interface which lets you pass the memory map (a mapping
> > > between guest physical addresses and offsets in a file descriptor) from
> > > QEMU to another process.
> > >
> > > The gdb stub protocol seems limited for the kind of event you want to
> > > trap, but there was already a GSoC project a few years ago that looked
> > > at gdb protocol extensions.  Jan, what was the outcome?
> > >
> > > In any case, I think there should be a separation between the ioctl KVM
> > > API and the socket userspace API.  By the way most of the KVM API is
> > > already there---e.g. reading/writing registers, breakpoints,
> > > etc.---though you'll want to add events such as cr3 or idtr writes.
> > >  
> > > > My question is, is this anything the KVM group would be interested in
> > > > bringing upstream? I'd definitely be willing to change my approach if
> > > > necessary. If there's no interest, I'll just have to maintain my own
> > > > patches.

-- 
Mihai DONȚU



* Re: Introspection API development
  2016-08-04 12:48 ` Stefan Hajnoczi
@ 2016-08-04 15:08   ` Stephen Pape
  0 siblings, 0 replies; 10+ messages in thread
From: Stephen Pape @ 2016-08-04 15:08 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kvm

We've developed an introspection API that I'm using with Xen already.
Our libraries right now have a core and a Xen plugin that gets loaded
at runtime. My goal is to just make a KVM plugin, but KVM would need
introspection features for that to happen.

We have abstractions for things like mapping guest memory. Given a
virtual address, vcpu, and length, it does the virt->phys translations
for each page and maps them sequentially in the process' address
space. It makes it convenient for accessing structures in the guest,
especially if they span multiple pages.

I didn't see a great way to get that kind of feature into QEMU.
Besides that, we do things with #GP/#UD faults. As far as I understand
it, QEMU doesn't have access to vmexits of that type, but maybe I'm
missing something?

-Stephen

On Thu, Aug 4, 2016 at 8:48 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Wed, Aug 03, 2016 at 11:25:07PM -0400, Stephen Pape wrote:
>> Hello all,
>>
>> For my own purposes, I've been modifying KVM to support introspection.
>> I have a project that uses Xen's "vm event" API, and I'm trying to get
>> things working with KVM as well. Our implementation needs to be able
>> to map guest memory directly. I have seen the qemu patch used by
>> libvmi, but they handle memory reads and writes through a socket,
>> which just won't work for us. I'm also looking to add other types of
>> hooks, and patching qemu seems too limited.
>>
>> My approach involves modifying the kernel driver to export a
>> /dev/virt/ filesystem. I suppose I could do it all via /dev/kvm ioctls
>> as well.
>>
>> My (relatively minor) patch allows processes besides the launching
>> process to do things like map guest memory and read VCPU states for a
>> VM. Soon, I'll be looking into adding support for handling events (cr3
>> writes, int3 traps, etc.). Eventually, an event should come in, a
>> program will handle it (while able to read memory/registers), and then
>> resume the VCPU.
>
> Why not put your code into QEMU if your end goal is to handle the
> vmexit?
>
> Can you describe the use case and software you are developing?
>
> Stefan

