* Request for help: passing network statistics from netback driver to Xen scheduler.
@ 2014-07-31 14:37 Dlugajczyk, Marcin
  2014-07-31 17:38 ` George Dunlap
  0 siblings, 1 reply; 8+ messages in thread
From: Dlugajczyk, Marcin @ 2014-07-31 14:37 UTC (permalink / raw)
  To: xen-devel

Hi everyone!

This is my first time on this mailing list. I’m working with Xen for my master’s thesis, and I’d like to ask you for advice. I don’t have any previous experience with Xen or Linux development, so please bear with me if I’m doing something silly!

What I’m trying to accomplish is to build a Xen scheduler that schedules domains based on their network intensity (I’m trying to implement one of the co-scheduling algorithms). So far, I’ve implemented a simple round-robin scheduler which seems to work. Now I want to extend it to take into account the network traffic of each domain (the higher the traffic, the higher the priority).

I’ve found a patch for an older version of Xen[1] which implemented something similar. The author of that patch added some statistics to the shared_info structure and updated them in the netback driver. I’d like to do something similar.

I’ve modified Ubuntu’s kernel code to add some statistics to shared_info:

- I’ve modified the shared_info structure in include/xen/interface/xen.h and added an array for storing statistics (up to 10 domains):
	unsigned long network_intensity[10];
- I’ve used the EXPORT_SYMBOL macro to make HYPERVISOR_shared_info available in the netback driver.
- In the drivers/net/xen-netback/netback.c file, I’ve modified the function xenvif_rx_action so that it increments the network intensity counter. I’ve also added a printk statement to verify that my code is working.
- I’ve updated the shared_info structure in the Xen source code as well.

After compiling and booting the kernel, I verified that the output from my printk statements was visible in dmesg.

However, when I try to access statistics from shared_info in Xen scheduler, they’re always 0.

Could you please tell me what I’m doing wrong? Do I have to somehow request synchronisation of shared_info between the kernel and Xen?
Or is there perhaps a better way to get data on network intensity without modifying the kernel?

Thank you in advance for your help.

Kind regards,
Marcin Długajczyk

[1]: http://csl.cse.psu.edu/?q=node/54
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-07-31 14:37 Request for help: passing network statistics from netback driver to Xen scheduler Dlugajczyk, Marcin
@ 2014-07-31 17:38 ` George Dunlap
  2014-07-31 19:12   ` Dlugajczyk, Marcin
  0 siblings, 1 reply; 8+ messages in thread
From: George Dunlap @ 2014-07-31 17:38 UTC (permalink / raw)
  To: Dlugajczyk, Marcin; +Cc: xen-devel

On Thu, Jul 31, 2014 at 10:37 AM, Dlugajczyk, Marcin
<j.j.dlugajczyk@cranfield.ac.uk> wrote:
> Hi everyone!
>
> This is my first time on this mailing list. I’m working with Xen for my master’s thesis, and I’d like to ask you for advice. I don’t have any previous experience with Xen or Linux development, so please bear with me if I’m doing something silly!
>
> What I’m trying to accomplish is to build a Xen scheduler that schedules domains based on their network intensity (I’m trying to implement one of the co-scheduling algorithms). So far, I’ve implemented a simple round-robin scheduler which seems to work. Now I want to extend it to take into account the network traffic of each domain (the higher the traffic, the higher the priority).

Just as a comment: I think a potential problem with this approach is
that you'll run into a positive feedback loop.  Processing network
traffic takes a lot of CPU time; and in particular, it needs the
ability to process packets in a *timely* manner.  Because most
connections are TCP, the ability to do work now *creates* work later
(in the form of more network packets).  Not having enough CPU, or
being delayed in when it can process packets, even by 50ms, can
significantly reduce the amount of traffic a VM gets.  So it's likely
that a domain that currently has a high priority will be able to
generate more traffic for itself, maintaining its high priority; and a
domain that currently has a low priority will not be able to send acks
fast enough, and will continue to receive low network traffic, thus
maintaining its low priority.

Something to watch out for, anyway. :-)

>
> I’ve found a patch for an older version of Xen[1] which implemented something similar. The author of that patch added some statistics to the shared_info structure and updated them in the netback driver. I’d like to do something similar.
>
> I’ve modified Ubuntu’s kernel code to add some statistics to shared_info:
>
> - I’ve modified the shared_info structure in include/xen/interface/xen.h and added an array for storing statistics (up to 10 domains):
>         unsigned long network_intensity[10];
> - I’ve used the EXPORT_SYMBOL macro to make HYPERVISOR_shared_info available in the netback driver.
> - In the drivers/net/xen-netback/netback.c file, I’ve modified the function xenvif_rx_action so that it increments the network intensity counter. I’ve also added a printk statement to verify that my code is working.
> - I’ve updated the shared_info structure in the Xen source code as well.
>
> After compiling and booting the kernel, I verified that the output from my printk statements was visible in dmesg.

Does the printk print the new value (i.e., "Intensity now [nnn] for
domain M"), or just print that it's trying to do something (i.e.,
"Incrementing network intensity")?

> However, when I try to access statistics from shared_info in Xen scheduler, they’re always 0.
>
> Could you please tell me what I’m doing wrong? Do I have to somehow request synchronisation of shared_info between the kernel and Xen?
> Or is there perhaps a better way to get data on network intensity without modifying the kernel?

Do you really need this information to be "live" on a ms granularity?
If not, you could have a process in dom0 wake up every several hundred
ms, read the information from the netback thread, and then make calls
to the scheduler to adjust priority.  It would be somewhat less
responsive, but much easier to change (as you could simply recompile
the dom0 process and restart it instead of having to reboot your
host).

If your goal is just to hack together something for your project, then
doing the shared page info is probably fine.  But if you want to
upstream anything, you'll probably have to take a different approach.

 -George



* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-07-31 17:38 ` George Dunlap
@ 2014-07-31 19:12   ` Dlugajczyk, Marcin
  2014-08-01 11:29     ` Wei Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Dlugajczyk, Marcin @ 2014-07-31 19:12 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel


On 31 Jul 2014, at 18:38, George Dunlap <George.Dunlap@eu.citrix.com> wrote:

> Just as a comment: I think a potential problem with this approach is
> that you'll run into a positive feedback loop.  Processing network
> traffic takes a lot of CPU time; and in particular, it needs the
> ability to process packets in a *timely* manner.  Because most
> connections are TCP, the ability to do work now *creates* work later
> (in the form of more network packets).  Not having enough CPU, or
> being delayed in when it can process packets, even by 50ms, can
> significantly reduce the amount of traffic a VM gets.  So it's likely
> that a domain that currently has a high priority will be able to
> generate more traffic for itself, maintaining its high priority; and a
> domain that currently has a low priority will not be able to send acks
> fast enough, and will continue to receive low network traffic, thus
> maintaining its low priority.
> 
> Something to watch out for, anyway. :-)

Thank you for your feedback! I’m aware of the potential problem. However, as I have a tight deadline, 
I’ll address the issue when it arises. 

> Does the printk print the new value (i.e., "Intensity now [nnn] for
> domain M"), or just print that it's trying to do something (i.e.,
> "Incrementing network intensity")?

It’s printing the new value. The xenvif_rx_action function looks like this after my modification:

void xenvif_rx_action(struct xenvif *vif)
{
	s8 status;
	u16 flags;
	struct xen_netif_rx_response *resp;
	struct sk_buff_head rxq;
	struct sk_buff *skb;
	LIST_HEAD(notify);
	int ret;
	int nr_frags;
	int count;
	unsigned long offset;
	struct skb_cb_overlay *sco;
	int need_to_notify = 0;
	struct shared_info *shared_info = HYPERVISOR_shared_info;

	struct netrx_pending_operations npo = {
		.copy  = vif->grant_copy_op,
		.meta  = vif->meta,
	};

	skb_queue_head_init(&rxq);

	count = 0;

	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
		vif = netdev_priv(skb->dev);
		nr_frags = skb_shinfo(skb)->nr_frags;

		sco = (struct skb_cb_overlay *)skb->cb;
		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);

		count += nr_frags + 1;

		__skb_queue_tail(&rxq, skb);

		/* Filled the batch queue? */
		/* XXX FIXME: RX path dependent on MAX_SKB_FRAGS */
		if (count + MAX_SKB_FRAGS >= XEN_NETIF_RX_RING_SIZE)
			break;
	}

	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));

	if (!npo.copy_prod)
		return;

	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);

	while ((skb = __skb_dequeue(&rxq)) != NULL) {
		sco = (struct skb_cb_overlay *)skb->cb;

		vif = netdev_priv(skb->dev);

		if ((1 << vif->meta[npo.meta_cons].gso_type) &
		    vif->gso_prefix_mask) {
			resp = RING_GET_RESPONSE(&vif->rx,
						 vif->rx.rsp_prod_pvt++);

			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;

			resp->offset = vif->meta[npo.meta_cons].gso_size;
			resp->id = vif->meta[npo.meta_cons].id;
			resp->status = sco->meta_slots_used;

			npo.meta_cons++;
			sco->meta_slots_used--;
		}


		vif->dev->stats.tx_bytes += skb->len;
		vif->dev->stats.tx_packets++;

		shared_info->network_intensity[vif->domid]++;
		printk(KERN_EMERG "RX ACTION: %d %ld\n", vif->domid, shared_info->network_intensity[vif->domid]);

		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);

		if (sco->meta_slots_used == 1)
			flags = 0;
		else
			flags = XEN_NETRXF_more_data;

		if (skb->ip_summed == CHECKSUM_PARTIAL) /* local packet? */
			flags |= XEN_NETRXF_csum_blank | XEN_NETRXF_data_validated;
		else if (skb->ip_summed == CHECKSUM_UNNECESSARY)
			/* remote but checksummed. */
			flags |= XEN_NETRXF_data_validated;

		offset = 0;
		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
					status, offset,
					vif->meta[npo.meta_cons].size,
					flags);

		if ((1 << vif->meta[npo.meta_cons].gso_type) &
		    vif->gso_mask) {
			struct xen_netif_extra_info *gso =
				(struct xen_netif_extra_info *)
				RING_GET_RESPONSE(&vif->rx,
						  vif->rx.rsp_prod_pvt++);

			resp->flags |= XEN_NETRXF_extra_info;

			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
			gso->u.gso.pad = 0;
			gso->u.gso.features = 0;

			gso->type = XEN_NETIF_EXTRA_TYPE_GSO;
			gso->flags = 0;
		}

		xenvif_add_frag_responses(vif, status,
					  vif->meta + npo.meta_cons + 1,
					  sco->meta_slots_used);

		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);

		if (ret)
			need_to_notify = 1;

		xenvif_notify_tx_completion(vif);

		npo.meta_cons += sco->meta_slots_used;
		dev_kfree_skb(skb);
	}

	if (need_to_notify)
		notify_remote_via_irq(vif->rx_irq);

	/* More work to do? */
	if (!skb_queue_empty(&vif->rx_queue))
		xenvif_kick_thread(vif);
}

> Do you really need this information to be "live" on a ms granularity?
> If not, you could have a process in dom0 wake up every several hundred
> ms, read the information from the netback thread, and then make calls
> to the scheduler to adjust priority.  It would be somewhat less
> responsive, but much easier to change (as you could simply recompile
> the dom0 process and restart it instead of having to reboot your
> host).

It doesn’t have to be “live”, but several hundred ms is probably too slow. I don’t really mind rebooting
the machine, as I’ve got a fairly automated setup for development. I’m looking for the simplest
solution that’d work :)

> 
> If your goal is just to hack together something for your project, then
> doing the shared page info is probably fine.  But if you want to
> upstream anything, you'll probably have to take a different approach.

I’m afraid my current version wouldn’t be merged; it’s more of a prototype.
However, if I get it working with some reasonable results, and there’s interest in this scheduler,
I’d be more than happy to clean up the code and use whatever approach is the right one.


Kind regards,
Marcin Długajczyk




* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-07-31 19:12   ` Dlugajczyk, Marcin
@ 2014-08-01 11:29     ` Wei Liu
  2014-08-01 16:32       ` Dlugajczyk, Marcin
  0 siblings, 1 reply; 8+ messages in thread
From: Wei Liu @ 2014-08-01 11:29 UTC (permalink / raw)
  To: Dlugajczyk, Marcin; +Cc: George Dunlap, wei.liu2, xen-devel

On Thu, Jul 31, 2014 at 07:12:55PM +0000, Dlugajczyk, Marcin wrote:
> 
> On 31 Jul 2014, at 18:38, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> 
> > Just as a comment: I think a potential problem with this approach is
> > that you'll run into a positive feedback loop.  Processing network
> > traffic takes a lot of CPU time; and in particular, it needs the
> > ability to process packets in a *timely* manner.  Because most
> > connections are TCP, the ability to do work now *creates* work later
> > (in the form of more network packets).  Not having enough CPU, or
> > being delayed in when it can process packets, even by 50ms, can
> > significantly reduce the amount of traffic a VM gets.  So it's likely
> > that a domain that currently has a high priority will be able to
> > generate more traffic for itself, maintaining its high priority; and a
> > domain that currently has a low priority will not be able to send acks
> > fast enough, and will continue to receive low network traffic, thus
> > maintaining its low priority.
> > 
> > Something to watch out for, anyway. :-)
> 
> Thank you for your feedback! I’m aware of the potential problem. However, as I have a tight deadline, 
> I’ll address the issue when it arises. 
> 
> > Does the printk print the new value (i.e., "Intensity now [nnn] for
> > domain M"), or just print that it's trying to do something (i.e.,
> > "Incrementing network intensity")?
> 
> It’s printing the new value. The xenvif_rx_action function looks like this after my modification:
> 
> void xenvif_rx_action(struct xenvif *vif)
> {
...
> }
> 

It's better to just paste in the diff instead of the whole function.
Also you will need to state clearly what version the diff is based on.

Wei.



* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-08-01 11:29     ` Wei Liu
@ 2014-08-01 16:32       ` Dlugajczyk, Marcin
  2014-08-01 17:21         ` Wei Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Dlugajczyk, Marcin @ 2014-08-01 16:32 UTC (permalink / raw)
  To: Wei Liu; +Cc: George Dunlap, xen-devel


On 01 Aug 2014, at 12:29, Wei Liu <wei.liu2@citrix.com> wrote:

> 
> It's better to just paste in the diff instead of the whole function.

My apologies for that! Here’s the diff:

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fa6ade7..121e793 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -134,7 +134,7 @@ EXPORT_SYMBOL_GPL(xen_have_vector_callback);
  * page as soon as fixmap is up and running.
  */
 struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info;
-
+EXPORT_SYMBOL(HYPERVISOR_shared_info);
 /*
  * Flag to determine whether vcpu info placement is available on all
  * VCPUs.  We assume it is to start with, and then set it to zero on
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index b898c6b..4b3e9d8 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -44,9 +44,11 @@
 #include <xen/xen.h>
 #include <xen/events.h>
 #include <xen/interface/memory.h>
+#include <xen/interface/xen.h>
 
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
+#include <asm/xen/hypervisor.h>
 
 /* Provide an option to disable split event channels at load time as
  * event channels are limited resource. Split event channels are
@@ -572,6 +574,7 @@ void xenvif_rx_action(struct xenvif *vif)
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
 	int need_to_notify = 0;
+	struct shared_info *shared_info = HYPERVISOR_shared_info;
 
 	struct netrx_pending_operations npo = {
 		.copy  = vif->grant_copy_op,
@@ -631,6 +634,9 @@ void xenvif_rx_action(struct xenvif *vif)
 		vif->dev->stats.tx_bytes += skb->len;
 		vif->dev->stats.tx_packets++;
 
+		shared_info->network_intensity[vif->domid]++;
+              printk(KERN_EMERG "RX ACTION: %d %ld\n", vif->domid, shared_info->network_intensity[vif->domid]);
+
 		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
@@ -1628,6 +1634,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 	struct gnttab_copy *gop = vif->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
+	struct shared_info *shared_info = HYPERVISOR_shared_info;
 
 	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
@@ -1687,6 +1694,9 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		vif->dev->stats.rx_bytes += skb->len;
 		vif->dev->stats.rx_packets++;
 
+		shared_info->network_intensity[vif->domid]++;
+              printk(KERN_EMERG "TX ACTION: %d\n", vif->domid);
+
 		work_done++;
 
 		netif_receive_skb(skb);
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 53ec416..11a3ef0 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -394,7 +394,7 @@ struct shared_info {
 	struct pvclock_wall_clock wc;
 
 	struct arch_shared_info arch;
-
+	unsigned long network_intensity[10];
 };
 
 /*

> Also you will need to state clearly what version the diff is based on.

I’ve used Ubuntu’s kernel source, hosted at:

git://kernel.ubuntu.com/ubuntu/ubuntu-trusty.git

Last commit: b90e9899aad49b601a744f503edc8e484490b906

The problem is: dmesg shows increasing values of the network_intensity counter; however, when I try to access it from the Xen scheduler,
it’s always 0.

Kind regards,
Marcin Długajczyk



* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-08-01 16:32       ` Dlugajczyk, Marcin
@ 2014-08-01 17:21         ` Wei Liu
  2014-08-01 18:20           ` Wei Liu
  0 siblings, 1 reply; 8+ messages in thread
From: Wei Liu @ 2014-08-01 17:21 UTC (permalink / raw)
  To: Dlugajczyk, Marcin; +Cc: George Dunlap, Wei Liu, xen-devel

On Fri, Aug 01, 2014 at 04:32:25PM +0000, Dlugajczyk, Marcin wrote:
> 
[...]
>  /*
> 
> > Also you will need to state clearly what version the diff is based on.
> 
> I’ve used Ubuntu’s kernel source, hosted at:
> 
> git://kernel.ubuntu.com/ubuntu/ubuntu-trusty.git
> 
> Last commit: b90e9899aad49b601a744f503edc8e484490b906
> 
> The problem is: dmesg shows increasing values of the network_intensity counter; however, when I try to access it from the Xen scheduler,
> it’s always 0.
> 

The change to netback looks simple.

You need to make sure hypervisor is accessing the current shared_info,
that is, shared_info for Dom0 in your case.

Wei.

> Kind regards,
> Marcin Długajczyk
> 



* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-08-01 17:21         ` Wei Liu
@ 2014-08-01 18:20           ` Wei Liu
  2014-08-01 19:14             ` Dlugajczyk, Marcin
  0 siblings, 1 reply; 8+ messages in thread
From: Wei Liu @ 2014-08-01 18:20 UTC (permalink / raw)
  To: Dlugajczyk, Marcin; +Cc: George Dunlap, Wei Liu, xen-devel

On Fri, Aug 01, 2014 at 06:21:20PM +0100, Wei Liu wrote:
> On Fri, Aug 01, 2014 at 04:32:25PM +0000, Dlugajczyk, Marcin wrote:
> > 
> [...]
> >  /*
> > 
> > > Also you will need to state clearly what version the diff is based on.
> > 
> > I’ve used Ubuntu’s kernel source, hosted at:
> > 
> > git://kernel.ubuntu.com/ubuntu/ubuntu-trusty.git
> > 
> > Last commit: b90e9899aad49b601a744f503edc8e484490b906
> > 
> > The problem is: dmesg shows increasing values of the network_intensity counter; however, when I try to access it from the Xen scheduler,
> > it’s always 0.
> > 
> 
> The change to netback looks simple.
> 
> You need to make sure hypervisor is accessing the current shared_info,

I mean "correct shared_info".



* Re: Request for help: passing network statistics from netback driver to Xen scheduler.
  2014-08-01 18:20           ` Wei Liu
@ 2014-08-01 19:14             ` Dlugajczyk, Marcin
  0 siblings, 0 replies; 8+ messages in thread
From: Dlugajczyk, Marcin @ 2014-08-01 19:14 UTC (permalink / raw)
  To: Wei Liu; +Cc: George Dunlap, xen-devel


On 01 Aug 2014, at 19:20, Wei Liu <wei.liu2@citrix.com> wrote:

>> The change to netback looks simple.
>> 
>> You need to make sure hypervisor is accessing the current shared_info,
> 
> I mean "correct shared_info".

I think I’m accessing shared_info from dom0. What I’m doing at the moment is as follows: in the do_schedule
function of my scheduler there’s the following loop:

if (current->domain->domain_id == 0)
{
	printk("MESSAGES: ");
	for ( i = 0; i < 10; i++)
	{
	    printk("%d ", shared_info(current->domain, network_intensity[i]));
	}
	printk("\n");
}

And it always prints zeros. dmesg in dom0 shows the counter increasing in the kernel logs, while the Xen output on the serial console shows 0s.

Any ideas what I’m doing wrong?


Regards,
Marcin
