From: Scott Wood <scottwood@freescale.com>
To: Alexander Graf <agraf@suse.de>
Cc: Yoder Stuart-B08248 <B08248@freescale.com>,
	"kvm-ppc@vger.kernel.org" <kvm-ppc@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: RFC: New API for PPC for vcpu mmu access
Date: Wed, 9 Feb 2011 17:09:28 -0600	[thread overview]
Message-ID: <20110209170928.6c629514@udp111988uds> (raw)
In-Reply-To: <220F22AA-31E5-4ACB-B0D5-557010096B91@suse.de>

On Wed, 9 Feb 2011 18:21:40 +0100
Alexander Graf <agraf@suse.de> wrote:

> 
> On 07.02.2011, at 21:15, Scott Wood wrote:
> 
> > That's pretty much what the proposed API does -- except it uses a void
> > pointer instead of uint64_t *.
> 
> Oh? Did I miss something there? The proposal looked as if it only transfers a single TLB entry at a time.

Right, I just meant in terms of avoiding a fixed reference to a hw-specific
type.

> > How about:
> > 
> > struct kvmppc_booke_tlb_entry {
> > 	union {
> > 		__u64 mas0_1;
> > 		struct {
> > 			__u32 mas0;
> > 			__u32 mas1;
> > 		};
> > 	};
> > 	__u64 mas2;
> > 	union {
> > 		__u64 mas7_3;
> > 		struct {
> > 			__u32 mas7;
> > 			__u32 mas3;
> > 		};
> > 	};
> > 	__u32 mas8;
> > 	__u32 pad;
> 
> Would it make sense to add some reserved fields or would we just bump up the mmu id?

I was thinking we'd just bump the ID.  I only stuck "pad" in there for
alignment.  And we're making a large array of it, so padding could hurt.

> > struct kvmppc_booke_tlb_params {
> > 	/*
> > 	 * book3e defines 4 TLBs.  Individual implementations may have
> > 	 * fewer.  TLBs that do not exist on the target must be configured
> > 	 * with a size of zero.  KVM will adjust TLBnCFG based on the sizes
> > 	 * configured here, though arrays greater than 2048 entries will
> > 	 * have TLBnCFG[NENTRY] set to zero.
> > 	 */
> > 	__u32 tlb_sizes[4];
> 
> Add some reserved fields?

MMU type ID also controls this, but could add some padding to make
extensions simpler (esp. since we're not making an array of it).  How much
would you recommend?

> > struct kvmppc_booke_tlb_search {
> 
> Search? I thought we agreed on having a search later, after the full get/set is settled?

We agreed on having a full array-like get/set... my preference was to keep
it all under one capability, which implies adding it at the same time.
But if we do KVM_TRANSLATE, we can probably drop KVM_SEARCH_TLB.  I'm
skeptical that an array-only interface will avoid performance problems
under every usage pattern, but we can implement it and try it out before
finalizing any of this.

> > 	struct kvmppc_booke_tlb_entry entry;
> > 	union {
> > 		__u64 mas5_6;
> > 		struct {
> > 			__u64 mas5;
> > 			__u64 mas6;
> > 		};
> > 	};
> > };

The fields inside the struct should be __u32, of course. :-P

> > - An entry with MAS1[V] = 0 terminates the list early (but there will
> >   be no terminating entry if the full array is valid).  On a call to
> >   KVM_GET_TLB, the contents of elements after the terminator are undefined.
> >   On a call to KVM_SET_TLB, excess elements beyond the terminating
> >   entry may not be accessed by KVM.
> 
> Very implementation specific, but ok with me. 

I assumed most MMU types would have some straightforward way of marking an
entry invalid (if not, it can add a software field in the struct), and that
it would be MMU-specific code that is processing the list.

> It's constrained to the BOOKE implementation of that GET/SET anyway. Is
> this how the hardware works too?

Hardware doesn't process lists of entries.  But MAS1[V] is the valid
bit in hardware.

> > [Note: Once we implement sregs, Qemu can determine which TLBs are
> > implemented by reading MMUCFG/TLBnCFG -- but in no case should a TLB be
> > unsupported by KVM if its existence is implied by the target CPU]
> > 
> > KVM_SET_TLB
> > -----------
> > 
> > Capability: KVM_CAP_SW_TLB
> > Type: vcpu ioctl
> > Parameters: struct kvm_set_tlb (in)
> > Returns: 0 on success
> >         -1 on error
> > 
> > struct kvm_set_tlb {
> > 	__u64 params;
> > 	__u64 array;
> > 	__u32 mmu_type;
> > };
> > 
> > [Note: I used __u64 rather than void * to avoid the need for special
> > compat handling with 32-bit userspace on a 64-bit kernel -- if the other
> > way is preferred, that's fine with me]
> 
> Oh, now I understand what you were proposing :). Sorry. No, this way is sane.

What about the ioctls that take only a pointer?  The actual calling
mechanism should work without compat, but in order for _IOR and such to not
assign a different IOCTL number based on the size of void *, we'd need to
lie and use plain _IO().  It looks like some ioctls such as
KVM_SET_TSS_ADDR already do this.

If we drop KVM_SEARCH_TLB and struct-ize KVM_GET_TLB to fit in a buffer
size parameter, it's moot though.

> > KVM_GET_TLB
> > -----------
> > 
> > Capability: KVM_CAP_SW_TLB
> > Type: vcpu ioctl
> > Parameters: void pointer (out)
> > Returns: 0 on success
> >         -1 on error
> > 
> > Reads the TLB array from a virtual CPU.  A successful call to
> > KVM_SET_TLB must have been previously made on this vcpu.  The argument
> > must point to space for an array of the size and type of TLB entry
> > structs configured by the most recent successful call to KVM_SET_TLB.
> > 
> > For mmu types BOOKE_NOHV and BOOKE_HV, the array is of type "struct
> > kvmppc_booke_tlb_entry", and must hold a number of entries equal to
> > the sum of the elements of tlb_sizes in the most recent successful
> > TLB configuration call.
> 
> We should add some sort of safety net here. The caller really should pass in how big that pointer is.

The caller must have previously called KVM_SET_TLB (or KVM_GET_TLB will
return an error), so it implicitly told KVM how much data to send, based
on MMU type and params.

I'm OK with an explicit buffer size for added safety, though.

-Scott

