* [Qemu-devel] sniffing traffic between VMs
@ 2013-10-07 14:47 Alexander Binun
  2013-10-10  9:02 ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2013-10-07 14:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: kahilm, markbl

Hello Friends,
  My name is Alex Binun and I am a researcher in the group of Prof. Shlomi Dolev at Ben-Gurion University of the Negev, Israel (http://www.cs.bgu.ac.il/~dolev/). The group investigates security in virtualization environments and is implementing a prototype on top of KVM. While searching for relevant material we (the group) came across Stefan's page, see his latest blog entry http://blog.vmsplice.net/search?updated-min=2013-01-01T00:00:00Z&updated-max=2014-01-01T00:00:00Z&max-results=5, and got your email address.

Our first task is to trace the traffic between individual VMs and between VMs and the VMM (the KVM driver). So we are searching for the proper places to insert "sniffer code". We suspect that some functions in qemu/hw/virtio should be targeted, and we would appreciate any hints on these places.

 Given the efforts towards standardizing virtual input/output that Stefan mentions in his latest blog entry, we hope the right places for inserting traffic sniffers can be found easily.

Great thanks in advance, 
   Mark, Martin and Alex 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] sniffing traffic between VMs
  2013-10-07 14:47 [Qemu-devel] sniffing traffic between VMs Alexander Binun
@ 2013-10-10  9:02 ` Stefan Hajnoczi
  2013-10-10 11:00   ` [Qemu-devel] kvm binary is deprecated Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-10-10  9:02 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, markbl, qemu-devel

On Mon, Oct 07, 2013 at 05:47:46PM +0300, Alexander Binun wrote:
> Our first task is to trace the traffic between individual VMs and between VMs and the VMM (the KVM driver). So we are searching for proper places to insert "sniffer code". We suspect that some functions in qemu/hw/virtio should be targeted. And we will appreciate any hints on this places.

My blog post about -netdev pcap in QEMU is useful for QEMU network code
development setups.  But the simplest way to sniff traffic in a
production x86 KVM configuration is using tcpdump on the host.

The common networking setup on the host is a Linux software bridge (e.g.
virbr0) and one tap device per guest (e.g. vm001-tap, vm002-tap).  The
tap devices are added to the bridge so guests can communicate with each
other.

When a guest sends a packet, the vhost_net host kernel driver injects
the packet into the guest's tap device.  The Linux network stack then
hands the packet from the tap device to the bridge.

The bridge will forward the packet as appropriate.  In guest<->guest
communication this means the packet is forwarded to the destination
guest's tap device.

The vhost_net driver instance for the destination guest then reads the
packet from its tap device and places it into the guest's virtio-net
receive buffer.

This configuration means you have 3 places where you can run tcpdump on
the host:

1. On the source guest's tap device (e.g. vm001-tap).
2. On the bridge interface (e.g. virbr0).
3. On the destination guest's tap device (e.g. vm002-tap).
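In shell terms, the three capture points map onto three tcpdump invocations. The sketch below just prints the commands rather than running them (the interface names are the examples used above, not fixed names; run the printed commands as root to actually capture):

```shell
# Print the capture command for each host-side observation point.
# Writing to a .pcap file keeps the capture for later offline analysis
# (tcpdump -r, wireshark).
for ifname in vm001-tap virbr0 vm002-tap; do
    printf 'tcpdump -n -e -i %s -w %s.pcap\n' "$ifname" "$ifname"
done
```

The tap devices are capture points 1 and 3 above; the bridge interface is point 2.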

There are other options too like using openvswitch or macvtap.
Openvswitch might be interesting because I think it allows you to add
filtering rules into the kernel and send packets that match the rules up
to a userspace process for inspection.

Stefan


* [Qemu-devel] kvm binary is deprecated
  2013-10-10  9:02 ` Stefan Hajnoczi
@ 2013-10-10 11:00   ` Alexander Binun
  2013-10-11  9:05     ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2013-10-10 11:00 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, markbl, qemu-devel

Stefan, many thanks! We are setting up the scene for experiments...

Unfortunately, we ran into one more problem. The configuration: Ubuntu 13.04, the stock KVM module, QEMU 1.4.0. VMs are created using virt-manager.

When we try to create a VM the following error message appears:
     --- kvm binary is deprecated, please use qemu-system-x86_64 instead

The same message appears when I try to run kvm --version.

Question: how should we upgrade/downgrade KVM or QEMU in order to make them work together properly?

Thanks, 
    Mark, Martin, Alex




On Thu 10 Oct 11:02 2013 Stefan Hajnoczi wrote:
> On Mon, Oct 07, 2013 at 05:47:46PM +0300, Alexander Binun wrote:
> > Our first task is to trace the traffic between individual VMs and between VMs and the VMM (the KVM driver). So we are searching for proper places to insert "sniffer code". We suspect that some functions in qemu/hw/virtio should be targeted. And we will appreciate any hints on this places.
> 
> My blog post about -netdev pcap in QEMU is useful for QEMU network code
> development setups.  But the simplest way to sniff traffic in a
> production x86 KVM configuration is using tcpdump on the host.
> 
> The common networking setup on the host is a Linux software bridge (e.g.
> virbr0) and one tap device per guest (e.g. vm001-tap, vm002-tap).  The
> tap devices are added to the bridge so guests can communicate with each
> other.
> 
> When a guest sends a packet, the vhost_net host kernel driver injects
> the packet into the guest's tap device.  The Linux network stack then
> hands the packet from the tap device to the bridge.
> 
> The bridge will forward the packet as appropriate.  In guest<->guest
> communication this means the packet is forwarded to the destination
> guest's tap device.
> 
> The vhost_net driver instance for the destination guest then reads the
> packet from its tap device and places it into the guest's virtio-net
> receive buffer.
> 
> This configuration means you have 3 places where you can run tcpdump on
> the host:
> 
> 1. On the source guest's tap device (e.g. vm001-tap).
> 2. On the bridge interface (e.g. virbr0).
> 3. On the destination guest's tap device (e.g. vm002-tap).
> 
> There are other options too like using openvswitch or macvtap.
> Openvswitch might be interesting because I think it allows you to add
> filtering rules into the kernel and send packets that match the rules up
> to a userspace process for inspection.
> 
> Stefan
> 


* Re: [Qemu-devel] kvm binary is deprecated
  2013-10-10 11:00   ` [Qemu-devel] kvm binary is deprecated Alexander Binun
@ 2013-10-11  9:05     ` Stefan Hajnoczi
  2013-10-12 14:45       ` Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-10-11  9:05 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, markbl, qemu-devel

On Thu, Oct 10, 2013 at 02:00:39PM +0300, Alexander Binun wrote:
> Stefan , great thanks! We are setting up the scene for experiments...
> 
> Unfortunately, we ran into yet one trouble. The configuration: Ubuntu 13.04, internal KVM, Qemu 1.4.0. VMs are created using virt-manager.
> 
> When we try to create a VM the following error message appears:
>      --- kvm binary is deprecated, please use qemu-system-x86_64 instead
> 
> The same message appears when I try to run kvm --version.
> 
> Question: how must be upgrade/degrade KVM oro Qemu in order to make them collaborate properly ?

It sounds like you may be building the old qemu-kvm.git source code.
Last year qemu-kvm.git was merged back into qemu.git.

It means you should use git://git.qemu-project.org/qemu.git if you are
building from source.

Some distros are creating transitional packages or wrapper scripts that
build QEMU (qemu-system-x86_64) and provide a /usr/bin/kvm or qemu-kvm
executable.

More info:
http://blog.vmsplice.net/2012/12/qemu-kvmgit-has-unforked-back-into.html

Stefan


* Re: [Qemu-devel] kvm binary is deprecated
  2013-10-11  9:05     ` Stefan Hajnoczi
@ 2013-10-12 14:45       ` Alexander Binun
  2013-10-14  9:12         ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2013-10-12 14:45 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, markbl, qemu-devel

Hello Stefan,
   The QEMU I use is the one installed via apt-get install qemu. The executable is in /usr/bin. The KVM driver is the one supplied with Ubuntu 13.04.

The version of QEMU is 1.4.0; after running qemu --version I get the message:

  --- QEMU emulator version 1.4.0 (Debian 1.4.0+dfsg-1expubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

Do you mean I should use a QEMU built from source (getting the sources from git://git.qemu-project.org/qemu.git)? Should I then compile from source and load the KVM module?

Regards, 
   Alex



> > When we try to create a VM the following error message appears:
> >      --- kvm binary is deprecated, please use qemu-system-x86_64 instead
> > 
> > The same message appears when I try to run kvm --version.
> > 
> > Question: how must be upgrade/degrade KVM oro Qemu in order to make them collaborate properly ?
> 
> It sounds like you may be building the old qemu-kvm.git source code.
> Last year qemu-kvm.git was merged back into qemu.git.
> 
> It means you should use git://git.qemu-project.org/qemu.git if you are
> building from source.
> 
> Some distros are creating transitional packages or wrapper scripts that
> build QEMU (qemu-system-x86_64) and provide a /usr/bin/kvm or qemu-kvm
> executable.
> 
> More info:
> http://blog.vmsplice.net/2012/12/qemu-kvmgit-has-unforked-back-into.html
> 
> Stefan
> 


* Re: [Qemu-devel] kvm binary is deprecated
  2013-10-12 14:45       ` Alexander Binun
@ 2013-10-14  9:12         ` Stefan Hajnoczi
  2013-10-14 10:36           ` Alexander Binun
  2013-12-18 11:53           ` [Qemu-devel] sniffing traffic between virtual machines Alexander Binun
  0 siblings, 2 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-10-14  9:12 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, markbl, qemu-devel

On Sat, Oct 12, 2013 at 05:45:52PM +0300, Alexander Binun wrote:
>    The qemu used by me is the one installed using apt-get install qemu. The executable is in /usr/bin. The KVM driver is the one supplied with Ubuntu 13.04.
> 
> The version of qemu is 1.4.0 (after running qemu --version I get the message
> 
>   --- QEMU emulator version 1.4.0 (Debian 1.4.0+dfsg-1expubuntu4), Copyright (c) 2003-2008 Fabrice Bellard
> 
> You mean I should use the build-from-sources qemu (getting the sources from git://git.qemu-project.org/qemu.git) ? Should I then compile from sources and mount the KVM ?

In that case it sounds like everything is coming from Ubuntu 13.04 and
should work together.

Sorry, I don't know about Ubuntu 13.04.  Perhaps there is already a
solution if you search the Ubuntu bug tracker.

Stefan


* Re: [Qemu-devel] kvm binary is deprecated
  2013-10-14  9:12         ` Stefan Hajnoczi
@ 2013-10-14 10:36           ` Alexander Binun
  2013-10-14 14:16             ` Stefan Hajnoczi
  2013-12-18 11:53           ` [Qemu-devel] sniffing traffic between virtual machines Alexander Binun
  1 sibling, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2013-10-14 10:36 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, markbl, qemu-devel

The workaround offered in bug trackers is: "change the path associated with the emulation tag in the xml definition file. Change it to qemu-system-x86_64".

Well, I am familiar with XML definition files for VMs: they are used when defining VMs manually through virsh (virsh define xmldef.xml and so on). There is an <emulator> tag there, pointing to the path of the emulator.

virt-manager (which I use) also creates such a file (putting it into /etc/libvirt/qemu).
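For reference, the element the workaround talks about sits in the <devices> section of the domain XML. A minimal sketch (the guest name and path are examples, and the required memory/os elements are omitted here) looks like:

```xml
<domain type='kvm'>
  <name>vm001</name>
  <devices>
    <!-- the workaround: point the emulator at the real binary -->
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
  </devices>
</domain>
```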

But so far I do not have valid XML definition files. So I intend to try the following approaches:
   --- find an example definition file and create a VM manually (through virsh)
   --- use qemu & kvm compiled from the Git sources you referred to.

What is your opinion?

Thanks in advance, 
    Alex






On Mon 14 Oct 11:12 2013 Stefan Hajnoczi wrote:
> On Sat, Oct 12, 2013 at 05:45:52PM +0300, Alexander Binun wrote:
> >    The qemu used by me is the one installed using apt-get install qemu. The executable is in /usr/bin. The KVM driver is the one supplied with Ubuntu 13.04.
> > 
> > The version of qemu is 1.4.0 (after running qemu --version I get the message
> > 
> >   --- QEMU emulator version 1.4.0 (Debian 1.4.0+dfsg-1expubuntu4), Copyright (c) 2003-2008 Fabrice Bellard
> > 
> > You mean I should use the build-from-sources qemu (getting the sources from git://git.qemu-project.org/qemu.git) ? Should I then compile from sources and mount the KVM ?
> 
> In that case it sounds like everything is coming from Ubuntu 13.04 and
> should work together.
> 
> Sorry, I don't know about Ubuntu 13.04.  Perhaps there is already a
> solution if you search the Ubuntu bug tracker.
> 
> Stefan
> 


* Re: [Qemu-devel] kvm binary is deprecated
  2013-10-14 10:36           ` Alexander Binun
@ 2013-10-14 14:16             ` Stefan Hajnoczi
  2013-10-24  9:23               ` [Qemu-devel] kvm binary is deprecated - solved! Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-10-14 14:16 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, markbl, qemu-devel

On Mon, Oct 14, 2013 at 12:36 PM, Alexander Binun <binun@cs.bgu.ac.il> wrote:
> The workaround offered in bug trackers is: "change the path associated with the emulation tag in the xml definition file. Change it to qemu-system-x86_64".
>
> Well, I am familiar with XML definition files for VMs: they are used manually when defining VMs in virsh (virsh define xmldef.xml and so on). There is the emulation tag there, pointing to the path to the emulator.
>
> virt-manager (used by me) creates such a file also (putting in into /etc/libvirt/qemu).
>
> But so far I do not have valid XML definition files. So I intend to try the following ways:
>    --- find an example definition file and create a VM manually (through virsh)
>    --- use qemu & kvm compiled from the Git sources referred to by you.

An easy trick:
# mv /usr/bin/kvm /usr/bin/kvm.orig
# ln -s /usr/bin/qemu-system-x86_64 /usr/bin/kvm

Hopefully libvirt will be happier with the actual qemu-system-x86_64
binary.  If this doesn't work you can move /usr/bin/kvm.orig back and
try the other methods.

Stefan


* [Qemu-devel] kvm binary is deprecated - solved!
  2013-10-14 14:16             ` Stefan Hajnoczi
@ 2013-10-24  9:23               ` Alexander Binun
  2013-10-24  9:49                 ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2013-10-24  9:23 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, yagel, markbl, qemu-devel

Hi Stefan,
 Many thanks - your easy trick works (after I upgraded Ubuntu 13.04 to 13.10)!

As for sniffing the traffic between VMs - I have one more idea and I would appreciate your feedback.

The activities of a VM that modify data can be divided into the following categories:
   1. Talk through network (sending net packets to other hosts)
   2. Disk operations
   3. Memory accesses

In essence, memory accesses are always performed BEFORE disk or network operations are executed (and before the corresponding drivers are employed). For example, we prepare data in a buffer and then send it into a socket.

That is, a sniffer in Linux should be placed in a kernel driver that makes physical memory available to user space.

Thanks,
   Alex

P.S. I CC my colleague Dr. Reuven Yagel, a member of the team I am working in.




On Mon 14 Oct 16:16 2013 Stefan Hajnoczi wrote:
> On Mon, Oct 14, 2013 at 12:36 PM, Alexander Binun <binun@cs.bgu.ac.il> wrote:
> > The workaround offered in bug trackers is: "change the path associated with the emulation tag in the xml definition file. Change it to qemu-system-x86_64".
> >
> > Well, I am familiar with XML definition files for VMs: they are used manually when defining VMs in virsh (virsh define xmldef.xml and so on). There is the emulation tag there, pointing to the path to the emulator.
> >
> > virt-manager (used by me) creates such a file also (putting in into /etc/libvirt/qemu).
> >
> > But so far I do not have valid XML definition files. So I intend to try the following ways:
> >    --- find an example definition file and create a VM manually (through virsh)
> >    --- use qemu & kvm compiled from the Git sources referred to by you.
> 
> An easy trick:
> # mv /usr/bin/kvm /usr/bin/kvm.orig
> # ln -s /usr/bin/qemu-system-x86_64 /usr/bin/kvm
> 
> Hopefully libvirt will be happier with the actual qemu-system-x86_64
> binary.  If this doesn't work you can move /usr/bin/kvm.orig back and
> try the other methods.
> 
> Stefan
> 


* Re: [Qemu-devel] kvm binary is deprecated - solved!
  2013-10-24  9:23               ` [Qemu-devel] kvm binary is deprecated - solved! Alexander Binun
@ 2013-10-24  9:49                 ` Stefan Hajnoczi
  2013-10-24  9:54                   ` [Qemu-devel] observing VM actions Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-10-24  9:49 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, yagel, markbl, qemu-devel

On Thu, Oct 24, 2013 at 10:23 AM, Alexander Binun <binun@cs.bgu.ac.il> wrote:
> As for sniffing the traffic between VMs - I have yet one idea and I would appreciate your feedback.
[...]
> That is, a sniffer in the Linux should be put at a kernel driver that makes physical memory available to user space.

I'm not sure what you are trying to do.  Can you describe your goal?

Depending on what you are trying to observe, there may already be
sniffing or tracing mechanisms available.

Stefan


* [Qemu-devel] observing VM actions
  2013-10-24  9:49                 ` Stefan Hajnoczi
@ 2013-10-24  9:54                   ` Alexander Binun
  0 siblings, 0 replies; 23+ messages in thread
From: Alexander Binun @ 2013-10-24  9:54 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, markbl, qemu-devel

I am trying to observe the memory/disk/network accesses made by a VM. The resulting log can be used to decide whether a VM is initiating a malicious action (because, say, it runs malicious software).


On Thu 24 Oct 11:49 2013 Stefan Hajnoczi wrote:
> On Thu, Oct 24, 2013 at 10:23 AM, Alexander Binun <binun@cs.bgu.ac.il> wrote:
> > As for sniffing the traffic between VMs - I have yet one idea and I would appreciate your feedback.
> [...]
> > That is, a sniffer in the Linux should be put at a kernel driver that makes physical memory available to user space.
> 
> I'm not sure what you are trying to do.  Can you describe your goal?
> 
> Depending on what you are trying to observe, there may already be
> sniffing or tracing mechanisms available.
> 
> Stefan
> 


* [Qemu-devel] sniffing traffic between virtual machines
  2013-10-14  9:12         ` Stefan Hajnoczi
  2013-10-14 10:36           ` Alexander Binun
@ 2013-12-18 11:53           ` Alexander Binun
  2013-12-19  9:05             ` Stefan Hajnoczi
  1 sibling, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2013-12-18 11:53 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, markbl, qemu-devel

Hello Friends, 
   Thanks for your hints; they really helped us!

We are trying to monitor the traffic (network packets etc.) between VMs in KVM. We succeeded in getting the address of the system call table (see http://syprog.blogspot.co.il/2011/10/hijack-linux-system-calls-part-iii.html) and intercepting the system calls going through the kernel.

This way we see ALL system calls (including those that were not initiated from within VMs).

How can we filter out the system calls not related to VMs? What is your opinion of our approach?

Best Regards, 
   Mark, Martin, Alex


On Mon 14 Oct 11:12 2013 Stefan Hajnoczi wrote:
> On Sat, Oct 12, 2013 at 05:45:52PM +0300, Alexander Binun wrote:
> >    The qemu used by me is the one installed using apt-get install qemu. The executable is in /usr/bin. The KVM driver is the one supplied with Ubuntu 13.04.
> > 
> > The version of qemu is 1.4.0 (after running qemu --version I get the message
> > 
> >   --- QEMU emulator version 1.4.0 (Debian 1.4.0+dfsg-1expubuntu4), Copyright (c) 2003-2008 Fabrice Bellard
> > 
> > You mean I should use the build-from-sources qemu (getting the sources from git://git.qemu-project.org/qemu.git) ? Should I then compile from sources and mount the KVM ?
> 
> In that case it sounds like everything is coming from Ubuntu 13.04 and
> should work together.
> 
> Sorry, I don't know about Ubuntu 13.04.  Perhaps there is already a
> solution if you search the Ubuntu bug tracker.
> 
> Stefan
> 


* Re: [Qemu-devel] sniffing traffic between virtual machines
  2013-12-18 11:53           ` [Qemu-devel] sniffing traffic between virtual machines Alexander Binun
@ 2013-12-19  9:05             ` Stefan Hajnoczi
  2014-03-05 16:35               ` [Qemu-devel] kill /destroy a VM - help Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-12-19  9:05 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, markbl, qemu-devel

On Wed, Dec 18, 2013 at 01:53:56PM +0200, Alexander Binun wrote:
> We are trying to monitor the traffic (network packets etc) between VMs in KVM.  We succeeded to get the address of the system call table (see http://syprog.blogspot.co.il/2011/10/hijack-linux-system-calls-part-iii.html) and intercept the system calls going through the kernel.
> 
> In such a way we see ALL system calls (including those which were not initiated from within VMs).

You do not see guest system calls when you hook host system calls.  You
only see host system calls (including those made by QEMU).

> How can we filter out the system calls not related to VMs ? What is your opinion regarding our approach ?

Maybe I'm missing context for this discussion, but I wouldn't intercept
system calls in order to monitor VM network traffic.

You can monitor VM traffic using libpcap on the VM's tap interface on
the host.  If you want fancier deep packet inspection, Open vSwitch
offers a flow-based interface so you can monitor just certain
conversations.

Stefan


* [Qemu-devel] kill /destroy a VM - help
  2013-12-19  9:05             ` Stefan Hajnoczi
@ 2014-03-05 16:35               ` Alexander Binun
  2014-03-06 10:22                 ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2014-03-05 16:35 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, boaz.menuhin, markbl, qemu-devel

Hello friends, hello Stefan,
   Thanks to your hints we succeeded in intercepting network traffic between VMs.

Now we have encountered one more problem: our security module (an LKM) performs security checks and, when it suspects malicious activity on a VCPU, must suspend or even kill that VM. The problem is: how do we suspend/kill a VCPU?

We have taken the following approach:
    1. Accessing the VM list (struct list_head vms_list) through the kallsyms interface
    2. Iterating through the VMs, reaching every VCPU (as a struct kvm_vcpu *vcpu)
    3. Running a security check on every such structure. That is, we are looking for a function like cpu_reset(struct kvm_vcpu *vcpu)

The following "reset functions" have been tried so far (taken from kvm_host.h):
   1. kvm_vcpu_uninit and kvm_x86_ops->vcpu_free. These cause the whole system (both host and guest OSs) to hang.
   2. kvm_vcpu_reset and kvm_arch_vcpu_free lead to the linker error "Warning! Function undefined".

Which "reset function" could you recommend ?

Thanks in advance,
  an Israeli team (Mark, Martin, Boaz and Alex)



On Thu 19 Dec 11:05 2013 Stefan Hajnoczi wrote:
> On Wed, Dec 18, 2013 at 01:53:56PM +0200, Alexander Binun wrote:
> > We are trying to monitor the traffic (network packets etc) between VMs in KVM.  We succeeded to get the address of the system call table (see http://syprog.blogspot.co.il/2011/10/hijack-linux-system-calls-part-iii.html) and intercept the system calls going through the kernel.
> > 
> > In such a way we see ALL system calls (including those which were not initiated from within VMs).
> 
> You do not see guest system calls when you hook host system calls.  You
> only see host system calls (including those made by QEMU).
> 
> > How can we filter out the system calls not related to VMs ? What is your opinion regarding our approach ?
> 
> Maybe I'm missing context for this discussion but I wouldn't intercept
> sytems calls in order to monitor VM network traffic.
> 
> You can monitor VM traffic using libpcap on the VM's tap interface on
> the host.  If you want fancier deep packet inspection, Open vSwitch
> offers a flow-based interface so you can monitor just certain
> conversations.
> 
> Stefan
> 


* Re: [Qemu-devel] kill /destroy a VM - help
  2014-03-05 16:35               ` [Qemu-devel] kill /destroy a VM - help Alexander Binun
@ 2014-03-06 10:22                 ` Stefan Hajnoczi
  2014-03-06 10:31                   ` Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2014-03-06 10:22 UTC (permalink / raw)
  To: Alexander Binun; +Cc: kahilm, boaz.menuhin, markbl, qemu-devel

On Wed, Mar 05, 2014 at 06:35:18PM +0200, Alexander Binun wrote:
> Now we encountered yet one problem: Our security module (which is a LKM) performs security check and, when suspecting malicious activity at a VCPU,  must suspend or even kill this VM. The problem is: how to suspend/kill a VCPU ?
> 
> We have taken the following approach: 
>     1. Accessing the VM list (struct list_head vms_list ) through the kallsyms interface 
>     2. Iterating through VMs, reaching every VCPU (as a structure struct kvm_vcpu *vcpu)
>     3. Running security check on every such structure. That is we were seeking for a function like cpu_reset(struct kvm_vcpu*vcpu)
> 
> The following "reset funtions" were so far tried (taken from kvm_host.h)
>    1. kvm_vcpu_uninit and kvm_x86_ops->vcpu_free. These cause the whole system (both host and guest OSs) hang.
>    2. kvm_vcpu_reset and kvm_arch_vcpu_free lead to the linker error  "Warning! Function undefined". 
> 
> Which "reset function" could you recommend ?

The simplest thing to kill a VM is to send SIGTERM to the QEMU process
(the process that contains the vcpu thread).
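From a host shell that looks roughly like the sketch below (with libvirt, the cleaner equivalent is `virsh destroy <guest>`; the exact process name depends on how the guest was started, so treat it as an assumption):

```shell
# Send SIGTERM to the first running QEMU system emulator found.
# Killing the process terminates all of its vcpu threads at once.
pid=$(pgrep -x qemu-system-x86_64 | head -n 1)
if [ -n "$pid" ]; then
    kill -TERM "$pid"
else
    echo "no qemu-system-x86_64 process found"
fi
```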

Stefan


* Re: [Qemu-devel] kill /destroy a VM - help
  2014-03-06 10:22                 ` Stefan Hajnoczi
@ 2014-03-06 10:31                   ` Alexander Binun
  2014-03-06 11:28                     ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2014-03-06 10:31 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: kahilm, boaz.menuhin, markbl, qemu-devel

Thanks for the rapid answer!

On Thu 06 Mar 12:22 2014 Stefan Hajnoczi wrote:
> On Wed, Mar 05, 2014 at 06:35:18PM +0200, Alexander Binun wrote:
> > Now we encountered yet one problem: Our security module (which is a LKM) performs security check and, when suspecting malicious activity at a VCPU,  must suspend or even kill this VM. The problem is: how to suspend/kill a VCPU ?
> > 
> > We have taken the following approach: 
> >     1. Accessing the VM list (struct list_head vms_list ) through the kallsyms interface 
> >     2. Iterating through VMs, reaching every VCPU (as a structure struct kvm_vcpu *vcpu)
> >     3. Running security check on every such structure. That is we were seeking for a function like cpu_reset(struct kvm_vcpu*vcpu)
> > 
> > The following "reset funtions" were so far tried (taken from kvm_host.h)
> >    1. kvm_vcpu_uninit and kvm_x86_ops->vcpu_free. These cause the whole system (both host and guest OSs) hang.
> >    2. kvm_vcpu_reset and kvm_arch_vcpu_free lead to the linker error  "Warning! Function undefined". 
> > 
> > Which "reset function" could you recommend ?
> 
> The simplest thing to kill a VM is to send SIGTERM to the QEMU process
> (the process that contains the vcpu thread).

Then - more questions:
   1. How can I access the QEMU process (relevant to a given VM) from within the kernel context (i.e. from a kernel module)?
   2. Should I uninitialize any internal structures for the VM being killed?
   3. My module detects malicious activity on a VCPU. How can one find the VM owning this VCPU?

Thanks,
  the team 


> Stefan
> 


* Re: [Qemu-devel] kill /destroy a VM - help
  2014-03-06 10:31                   ` Alexander Binun
@ 2014-03-06 11:28                     ` Paolo Bonzini
  2014-03-06 15:54                       ` [Qemu-devel] kill /destroy a VM - still hangs! Alexander Binun
                                         ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-03-06 11:28 UTC (permalink / raw)
  To: Alexander Binun, Stefan Hajnoczi; +Cc: kahilm, boaz.menuhin, markbl, qemu-devel

On 06/03/2014 11:31, Alexander Binun wrote:
> Then - more questions :
>    1. How can I access the Qemu process (relevant to a given VM) from within in the kernel context (being in a kernel module) ?

The struct pid for the VCPU is in the "pid" field of struct kvm_vcpu.

From there, if needed, you can get the task (with pid_task) and the
task's thread group leader (the task's group_leader), and send a signal
to it.

>    2. Should I uninitialize some internal structures for the VM being killed ?

No, it will happen automatically.  When QEMU is terminated, the VM's 
file descriptor is closed and this frees all internal structures.

>    3. My module detects malicious activities at a VCPU. How can one get the VM owning this VCPU ?

Field "kvm" in struct kvm_vcpu points to the struct kvm for the VM.
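Putting the three answers together, a kernel-module sketch could look like the following (untested and not compilable outside a kernel tree; the field and helper names are as they appeared in kvm_host.h around that time, so treat them as assumptions):

```c
#include <linux/kvm_host.h>
#include <linux/pid.h>
#include <linux/sched.h>

/* Sketch only: terminate the QEMU process that owns a suspicious VCPU. */
static void kill_vm_of_vcpu(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;	/* answer 3: the owning VM */
	struct task_struct *task;

	/* answer 1: vcpu->pid -> task -> thread group leader (QEMU) */
	task = pid_task(vcpu->pid, PIDTYPE_PID);
	if (task)
		send_sig(SIGTERM, task->group_leader, 1);

	/* answer 2: nothing to free manually; when QEMU exits, closing
	 * the VM's file descriptor releases kvm's structures. */
	(void)kvm;
}
```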

Paolo


* Re: [Qemu-devel] kill /destroy a VM - still hangs!
  2014-03-06 11:28                     ` Paolo Bonzini
@ 2014-03-06 15:54                       ` Alexander Binun
  2014-03-09 15:40                       ` [Qemu-devel] trying to kill a VM Alexander Binun
  2014-03-13 12:59                       ` [Qemu-devel] different IDTs of the same VCPU Alexander Binun
  2 siblings, 0 replies; 23+ messages in thread
From: Alexander Binun @ 2014-03-06 15:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kahilm, Stefan Hajnoczi, markbl, qemu-devel, boaz.menuhin

Hello Friends, 
   Thanks to your help I have found the task structure of the target process (denote it as TASK) and its group leader (TASK->group_leader).

Now I did the following:

struct siginfo info;
..

info.si_signo = SIGTERM;
info.si_code = SI_QUEUE;
info.si_errno = 0; /* no recovery */
	
status = send_sig_info(SIGTERM, &info, task);

The result: both the host and the guest hang!

Can I use the kill function directly: kill(TASK->tgid, SIGTERM)? That function is a user-space one, though...

Best Regards,
   the team




On Thu 06 Mar 13:28 2014 Paolo Bonzini wrote:
> On 06/03/2014 11:31, Alexander Binun wrote:
> > Then - more questions :
> >    1. How can I access the Qemu process (relevant to a given VM) from within in the kernel context (being in a kernel module) ?
> 
> The struct pid for the VCPU is in the "pid" field of struct kvm_vcpu.
> 
>  From there if needed you can get the task (with pid_task) and the 
> task's thread group leader (the task's group_leader), and send a signal 
> to it.
> 
> >    2. Should I uninitialize some internal structures for the VM being killed ?
> 
> No, it will happen automatically.  When QEMU is terminated, the VM's 
> file descriptor is closed and this frees all internal structures.
> 
> >    3. My module detects malicious activities at a VCPU. How can one get the VM owning this VCPU ?
> 
> Field "kvm" in struct kvm_vcpu points to the struct kvm for the VM.
> 
> Paolo
> 


* [Qemu-devel] trying to kill a VM
  2014-03-06 11:28                     ` Paolo Bonzini
  2014-03-06 15:54                       ` [Qemu-devel] kill /destroy a VM - still hangs! Alexander Binun
@ 2014-03-09 15:40                       ` Alexander Binun
  2014-03-13 12:59                       ` [Qemu-devel] different IDTs of the same VCPU Alexander Binun
  2 siblings, 0 replies; 23+ messages in thread
From: Alexander Binun @ 2014-03-09 15:40 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kahilm, Stefan Hajnoczi, markbl, qemu-devel, boaz.menuhin

Hello Friends, 
  I have tried a number of tricks to kill a VM - so far in vain :-)

As you told me I have reached the VCPU of the VM to be killed:

struct kvm_vcpu *cpu = (struct kvm_vcpu*)vcpu;
struct pid *vcpu_pid = cpu->pid;
struct task_struct *task = pid_task(vcpu_pid, PIDTYPE_PID);

Then I tried to kill the VM behind vcpu_pid in various ways:

--- kill_pid(task_pid(task), SIGKILL, 1); (SIGTERM was issued as well)
     --- I have tried task_pgrp(task) and task_tgid(task) as well

--- send_sig_info(SIGTERM, &info, task), where struct siginfo info has SIGTERM as si_signo and SI_QUEUE as si_code

The result is the same: the host and the guest hang! I am forced to reboot the system.

However, when I obtain the VM's PID manually (by running grep pid /var/run/libvirt/qemu/GUEST.xml, as explained at http://chilung.blogspot.co.il/2013/08/kvm-how-to-find-guest-vms-process-id-pid.html) and send kill to this PID from the command line, the corresponding VM shuts off!

What magic does the manual method perform in order to succeed?

Thanks in advance, 
     the Israeli team



On Thu 06 Mar 13:28 2014 Paolo Bonzini wrote:
> Il 06/03/2014 11:31, Alexander Binun ha scritto:
> > Then - more questions :
> >    1. How can I access the Qemu process (relevant to a given VM) from within in the kernel context (being in a kernel module) ?
> 
> The struct pid for the VCPU is in the "pid" field of struct kvm_vcpu.
> 
>  From there if needed you can get the task (with pid_task) and the 
> task's thread group leader (the task's group_leader), and send a signal 
> to it.
> 
> >    2. Should I uninitialize some internal structures for the VM being killed ?
> 
> No, it will happen automatically.  When QEMU is terminated, the VM's 
> file descriptor is closed and this frees all internal structures.
> 
> >    3. My module detects malicious activities at a VCPU. How can one get the VM owning this VCPU ?
> 
> Field "kvm" in struct kvm_vcpu points to the struct kvm for the VM.
> 
> Paolo
> 


* [Qemu-devel] different IDTs of the same VCPU
  2014-03-06 11:28                     ` Paolo Bonzini
  2014-03-06 15:54                       ` [Qemu-devel] kill /destroy a VM - still hangs! Alexander Binun
  2014-03-09 15:40                       ` [Qemu-devel] trying to kill a VM Alexander Binun
@ 2014-03-13 12:59                       ` Alexander Binun
  2014-03-13 15:15                         ` Paolo Bonzini
  2 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2014-03-13 12:59 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: yagel, kahilm, Stefan Hajnoczi, qemu-devel, boaz.menuhin, markbl

Dear Friends, 
  
   Thanks for your assistance!

We would like to ask you a question about the KVM internals. 

Our module includes a timer which (once every second) fetches the IDT value of every online VCPU in the system using kvm_x86_ops->get_idt; the code looks like:

  struct kvm *kvm;
  struct kvm_vcpu *curr_vcpu;
  struct desc_ptr dt;
  int i;

  list_for_each_entry(kvm, vms_list, vm_list) {
    for (i = 0; i < atomic_read(&kvm->online_vcpus); i++) {
      curr_vcpu = kvm->vcpus[i];
      kvm_x86_ops->get_idt(curr_vcpu, &dt);
    }
  }

We have noticed that get_idt returns DIFFERENT values for the same VCPU (i.e. for the same value of i, referring to the same VCPU). We cannot understand this; could you explain?

It is very strange, since nobody changes the IDT value (as, for example, rootkits do).

Regards,
    the Israeli KVM team
  


* Re: [Qemu-devel] different IDTs of the same VCPU
  2014-03-13 12:59                       ` [Qemu-devel] different IDTs of the same VCPU Alexander Binun
@ 2014-03-13 15:15                         ` Paolo Bonzini
  2014-03-17 11:54                           ` Alexander Binun
  0 siblings, 1 reply; 23+ messages in thread
From: Paolo Bonzini @ 2014-03-13 15:15 UTC (permalink / raw)
  To: Alexander Binun
  Cc: yagel, kahilm, Stefan Hajnoczi, qemu-devel, boaz.menuhin, markbl

Il 13/03/2014 13:59, Alexander Binun ha scritto:
> Dear Friends,
>
>    Thanks for your assistance!
>
> We would like to ask you a question about the KVM internals.
>
> Our module includes a timer which (once in every second) fetches the IDT value of every online VCPU in the system using the kvm_x86_ops->get_idt ; the code looks like:
>
>   struct kvm_vcpu *curr_vcpu;
>   struct desc_ptr dt;
>
>   list_for_each_entry(kvm, vms_list, vm_list)
>   {
>     for (i = 0; i < kvm->online_vcpus.counter; i++)
>        {
>        curr_vcpu = kvm->vcpus[i];
>        kvm_x86_ops->get_idt(curr_vcpu, &dt);
>     }
>   }
>
> We have noticed that get_idt returns DIFFERENT values for the same
> VCPU (i.e. for the same value of i that refers to a given VCPU). We
> cannot understand this issue; could you explain ?
>
> It is very strange since nobody changes the IDT value (as , for example, rootkits do).

At the very least, running nested virtualization would lead to different 
IDT values.

But more simply, on Intel you can hardly do anything with kvm_x86_ops or 
kvm_vcpu except on the same physical CPU that is in vcpu->cpu.  The 
state is not in memory, it is cached inside the physical CPU.

There is no easy solution to this without modifying KVM.  You can add a 
request bit to KVM's vcpu->requests field, kick the vcpu and do the 
check in vcpu_enter_guest.
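
A sketch of the approach outlined above - kernel-style pseudocode only, not compilable as-is. The request-bit name KVM_REQ_CHECK_IDT is hypothetical; a real patch would have to allocate a free bit alongside KVM's existing KVM_REQ_* definitions:

```c
/* Hypothetical request bit (a real patch must pick an unused number). */
#define KVM_REQ_CHECK_IDT  /* ...some free request bit... */

/* Caller (e.g. the once-a-second timer): ask the VCPU to sample its
 * own IDT instead of reading it from a foreign physical CPU. */
kvm_make_request(KVM_REQ_CHECK_IDT, vcpu);
kvm_vcpu_kick(vcpu);

/* In vcpu_enter_guest(), where the VCPU state is loaded on this CPU: */
if (kvm_check_request(KVM_REQ_CHECK_IDT, vcpu)) {
        struct desc_ptr dt;

        kvm_x86_ops->get_idt(vcpu, &dt);  /* now safe: state is resident */
        /* compare dt against the expected value, log on mismatch */
}
```

The point of the indirection is that kvm_make_request() plus kvm_vcpu_kick() forces the VCPU thread out of guest mode, so the subsequent check runs on the physical CPU that actually holds the cached VMCS state.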

Paolo


* Re: [Qemu-devel] different IDTs of the same VCPU
  2014-03-13 15:15                         ` Paolo Bonzini
@ 2014-03-17 11:54                           ` Alexander Binun
  2014-03-17 12:20                             ` Paolo Bonzini
  0 siblings, 1 reply; 23+ messages in thread
From: Alexander Binun @ 2014-03-17 11:54 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: yagel, kahilm, Stefan Hajnoczi, qemu-devel, boaz.menuhin, markbl

Dear friends, great thanks!

To summarize: we are trying to monitor VCPU IDT changes made by external parties (e.g. rootkits) rather than by the intra-KVM machinery. Are there parameters that reveal such changes?

Best Regards, 
   The KVM Israeli team


On Thu 13 Mar 17:15 2014 Paolo Bonzini wrote:
> Il 13/03/2014 13:59, Alexander Binun ha scritto:
> > Dear Friends,
> >
> >    Thanks for your assistance!
> >
> > We would like to ask you a question about the KVM internals.
> >
> > Our module includes a timer which (once in every second) fetches the IDT value of every online VCPU in the system using the kvm_x86_ops->get_idt ; the code looks like:
> >
> >   struct kvm_vcpu *curr_vcpu;
> >   struct desc_ptr dt;
> >
> >   list_for_each_entry(kvm, vms_list, vm_list)
> >   {
> >     for (i = 0; i < kvm->online_vcpus.counter; i++)
> >        {
> >        curr_vcpu = kvm->vcpus[i];
> >        kvm_x86_ops->get_idt(curr_vcpu, &dt);
> >     }
> >   }
> >
> > We have noticed that get_idt returns DIFFERENT values for the same
> > VCPU (i.e. for the same value of i that refers to a given VCPU). We
> > cannot understand this issue; could you explain ?
> >
> > It is very strange since nobody changes the IDT value (as , for example, rootkits do).
> 
> At the very least, running nested virtualization would lead to different 
> IDT values.
> 
> But more simply, on Intel you can hardly do anything with kvm_x86_ops or 
> kvm_vcpu except on the same physical CPU that is in vcpu->cpu.  The 
> state is not in memory, it is cached inside the physical CPU.
> 
> There is no easy solution to this without modifying KVM.  You can add a 
> request bit to KVM's vcpu->requests field, kick the vcpu and do the 
> check in vcpu_enter_guest.
> 
> Paolo
> 


* Re: [Qemu-devel] different IDTs of the same VCPU
  2014-03-17 11:54                           ` Alexander Binun
@ 2014-03-17 12:20                             ` Paolo Bonzini
  0 siblings, 0 replies; 23+ messages in thread
From: Paolo Bonzini @ 2014-03-17 12:20 UTC (permalink / raw)
  To: Alexander Binun
  Cc: yagel, kahilm, Stefan Hajnoczi, qemu-devel, boaz.menuhin, markbl

Il 17/03/2014 12:54, Alexander Binun ha scritto:
> Dear friends, great thanks!
>
> To summarize: we are trying to monitor VCPU IDT changes that are done
> by external parties (e.g. rootkits) and not by intra-KVM machinery.
> Are there parameters that witness such changes ?

There is no way to intercept changes to the interrupt descriptor table.

You can:

* look at the IDTR values on every vmexit, including before injecting an 
interrupt, but that won't protect against hijacking of software interrupts 
such as int $0x80;

* protect the IDT against writes using KVM's page-table mechanisms, but 
that won't catch the case where the IDT is moved to a whole new page.

Paolo


end of thread, other threads:[~2014-03-17 12:20 UTC | newest]

Thread overview: 23+ messages
2013-10-07 14:47 [Qemu-devel] sniffing traffic between VMs Alexander Binun
2013-10-10  9:02 ` Stefan Hajnoczi
2013-10-10 11:00   ` [Qemu-devel] kvm binary is deprecated Alexander Binun
2013-10-11  9:05     ` Stefan Hajnoczi
2013-10-12 14:45       ` Alexander Binun
2013-10-14  9:12         ` Stefan Hajnoczi
2013-10-14 10:36           ` Alexander Binun
2013-10-14 14:16             ` Stefan Hajnoczi
2013-10-24  9:23               ` [Qemu-devel] kvm binary is deprecated - solved! Alexander Binun
2013-10-24  9:49                 ` Stefan Hajnoczi
2013-10-24  9:54                   ` [Qemu-devel] observing VM actions Alexander Binun
2013-12-18 11:53           ` [Qemu-devel] sniffing traffic between virtual machines Alexander Binun
2013-12-19  9:05             ` Stefan Hajnoczi
2014-03-05 16:35               ` [Qemu-devel] kill /destroy a VM - help Alexander Binun
2014-03-06 10:22                 ` Stefan Hajnoczi
2014-03-06 10:31                   ` Alexander Binun
2014-03-06 11:28                     ` Paolo Bonzini
2014-03-06 15:54                       ` [Qemu-devel] kill /destroy a VM - still hangs! Alexander Binun
2014-03-09 15:40                       ` [Qemu-devel] trying to kill a VM Alexander Binun
2014-03-13 12:59                       ` [Qemu-devel] different IDTs of the same VCPU Alexander Binun
2014-03-13 15:15                         ` Paolo Bonzini
2014-03-17 11:54                           ` Alexander Binun
2014-03-17 12:20                             ` Paolo Bonzini
