linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/5] Make use of hardware reference and change bits in HPT
@ 2011-12-15 12:00 Paul Mackerras
  2011-12-15 12:01 ` [PATCH 1/5] KVM: PPC: Book3S HV: Keep HPTE locked when invalidating Paul Mackerras
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Paul Mackerras @ 2011-12-15 12:00 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, kvm-ppc

This series of patches builds on top of my previous series and
modifies the Book3S HV memory management code to use the hardware
reference and change bits in the guest hashed page table.  This makes
kvm_age_hva() more efficient, lets us implement the dirty page
tracking properly (which in turn means that things like VGA emulation
in qemu can work), and also means that we can supply hardware
reference and change information to the guest -- not that Linux guests
currently use that information, but possibly they will want it in
future, and there is an interface defined in PAPR for it.

Paul.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/5] KVM: PPC: Book3S HV: Keep HPTE locked when invalidating
  2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
@ 2011-12-15 12:01 ` Paul Mackerras
  2011-12-15 12:02 ` [PATCH 2/5] KVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits Paul Mackerras
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Paul Mackerras @ 2011-12-15 12:01 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, kvm-ppc

This reworks the implementations of the H_REMOVE and H_BULK_REMOVE
hcalls to make sure that we keep the HPTE locked and in the reverse-
mapping chain until we have finished invalidating it.  Previously
we would remove it from the chain and unlock it before invalidating
it, leaving a tiny window when the guest could access the page even
though we believe we have removed it from the guest (e.g.,
kvm_unmap_hva() has been called for the page and has found no HPTEs
in the chain).  In addition, we'll need this for future patches where
we will need to read the R and C bits in the HPTE after invalidating
it.

Doing this required restructuring kvmppc_h_bulk_remove() substantially.
Since we want to batch up the tlbies, we now need to keep several
HPTEs locked simultaneously.  In order to avoid possible deadlocks,
we don't spin on the HPTE bitlock for any except the first HPTE in
a batch.  If we can't acquire the HPTE bitlock for the second or
subsequent HPTE, we terminate the batch at that point, do the tlbies
that we have accumulated so far, unlock those HPTEs, and then start
a new batch to do the remaining invalidations.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |  212 ++++++++++++++++++++--------------
 1 files changed, 125 insertions(+), 87 deletions(-)
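
In outline, the reworked kvmppc_h_bulk_remove() loop has this shape (a
simplified sketch of the structure described above, not the code from
the diff below; hpte_for_request() and do_tlbies() are hypothetical
stand-ins for the request decoding and the tlbie/tlbiel sequences):

	for (i = 0; i < 4 && ret == H_SUCCESS; ) {
		n = 0;
		for (; i < 4; ++i) {
			hp = hpte_for_request(args, i);	/* hypothetical */
			if (!try_lock_hpte(hp, HPTE_V_HVLOCK)) {
				if (n)
					break;	/* flush the batch we have */
				while (!try_lock_hpte(hp, HPTE_V_HVLOCK))
					cpu_relax();	/* spin only on the first */
			}
			hp[0] &= ~HPTE_V_VALID;	/* invalidate, keep it locked */
			tlbrb[n] = compute_tlbie_rb(hp[0], hp[1], pte_index);
			hptes[n++] = hp;
		}
		do_tlbies(kvm, tlbrb, n);	/* hypothetical: batched tlbie + sync */
		for (k = 0; k < n; ++k)
			unlock_hpte(hptes[k], 0);	/* drop the locks only now */
	}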

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 7c0fc99..823348d 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -140,6 +140,12 @@ static pte_t lookup_linux_pte(struct kvm_vcpu *vcpu, unsigned long hva,
 	return kvmppc_read_update_linux_pte(ptep, writing);
 }
 
+static inline void unlock_hpte(unsigned long *hpte, unsigned long hpte_v)
+{
+	asm volatile(PPC_RELEASE_BARRIER "" : : : "memory");
+	hpte[0] = hpte_v;
+}
+
 long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 		    long pte_index, unsigned long pteh, unsigned long ptel)
 {
@@ -356,6 +362,7 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long *hpte;
 	unsigned long v, r, rb;
+	struct revmap_entry *rev;
 
 	if (pte_index >= HPT_NPTE)
 		return H_PARAMETER;
@@ -368,30 +375,32 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 		hpte[0] &= ~HPTE_V_HVLOCK;
 		return H_NOT_FOUND;
 	}
-	if (atomic_read(&kvm->online_vcpus) == 1)
-		flags |= H_LOCAL;
-	vcpu->arch.gpr[4] = v = hpte[0] & ~HPTE_V_HVLOCK;
-	vcpu->arch.gpr[5] = r = hpte[1];
-	rb = compute_tlbie_rb(v, r, pte_index);
-	if (v & HPTE_V_VALID)
+
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	v = hpte[0] & ~HPTE_V_HVLOCK;
+	if (v & HPTE_V_VALID) {
+		hpte[0] &= ~HPTE_V_VALID;
+		rb = compute_tlbie_rb(v, hpte[1], pte_index);
+		if (!(flags & H_LOCAL) && atomic_read(&kvm->online_vcpus) > 1) {
+			while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
+				cpu_relax();
+			asm volatile("ptesync" : : : "memory");
+			asm volatile(PPC_TLBIE(%1,%0)"; eieio; tlbsync"
+				     : : "r" (rb), "r" (kvm->arch.lpid));
+			asm volatile("ptesync" : : : "memory");
+			kvm->arch.tlbie_lock = 0;
+		} else {
+			asm volatile("ptesync" : : : "memory");
+			asm volatile("tlbiel %0" : : "r" (rb));
+			asm volatile("ptesync" : : : "memory");
+		}
 		remove_revmap_chain(kvm, pte_index, v);
-	smp_wmb();
-	hpte[0] = 0;
-	if (!(v & HPTE_V_VALID))
-		return H_SUCCESS;
-	if (!(flags & H_LOCAL)) {
-		while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
-			cpu_relax();
-		asm volatile("ptesync" : : : "memory");
-		asm volatile(PPC_TLBIE(%1,%0)"; eieio; tlbsync"
-			     : : "r" (rb), "r" (kvm->arch.lpid));
-		asm volatile("ptesync" : : : "memory");
-		kvm->arch.tlbie_lock = 0;
-	} else {
-		asm volatile("ptesync" : : : "memory");
-		asm volatile("tlbiel %0" : : "r" (rb));
-		asm volatile("ptesync" : : : "memory");
 	}
+	r = rev->guest_rpte;
+	unlock_hpte(hpte, 0);
+
+	vcpu->arch.gpr[4] = v;
+	vcpu->arch.gpr[5] = r;
 	return H_SUCCESS;
 }
 
@@ -399,82 +408,113 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long *args = &vcpu->arch.gpr[4];
-	unsigned long *hp, tlbrb[4];
-	long int i, found;
-	long int n_inval = 0;
-	unsigned long flags, req, pte_index;
+	unsigned long *hp, *hptes[4], tlbrb[4];
+	long int i, j, k, n, found, indexes[4];
+	unsigned long flags, req, pte_index, rcbits;
 	long int local = 0;
 	long int ret = H_SUCCESS;
+	struct revmap_entry *rev, *revs[4];
 
 	if (atomic_read(&kvm->online_vcpus) == 1)
 		local = 1;
-	for (i = 0; i < 4; ++i) {
-		pte_index = args[i * 2];
-		flags = pte_index >> 56;
-		pte_index &= ((1ul << 56) - 1);
-		req = flags >> 6;
-		flags &= 3;
-		if (req == 3)
-			break;
-		if (req != 1 || flags == 3 ||
-		    pte_index >= HPT_NPTE) {
-			/* parameter error */
-			args[i * 2] = ((0xa0 | flags) << 56) + pte_index;
-			ret = H_PARAMETER;
-			break;
-		}
-		hp = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
-		while (!try_lock_hpte(hp, HPTE_V_HVLOCK))
-			cpu_relax();
-		found = 0;
-		if (hp[0] & (HPTE_V_ABSENT | HPTE_V_VALID)) {
-			switch (flags & 3) {
-			case 0:		/* absolute */
-				found = 1;
+	for (i = 0; i < 4 && ret == H_SUCCESS; ) {
+		n = 0;
+		for (; i < 4; ++i) {
+			j = i * 2;
+			pte_index = args[j];
+			flags = pte_index >> 56;
+			pte_index &= ((1ul << 56) - 1);
+			req = flags >> 6;
+			flags &= 3;
+			if (req == 3) {		/* no more requests */
+				i = 4;
 				break;
-			case 1:		/* andcond */
-				if (!(hp[0] & args[i * 2 + 1]))
-					found = 1;
+			}
+			if (req != 1 || flags == 3 || pte_index >= HPT_NPTE) {
+				/* parameter error */
+				args[j] = ((0xa0 | flags) << 56) + pte_index;
+				ret = H_PARAMETER;
 				break;
-			case 2:		/* AVPN */
-				if ((hp[0] & ~0x7fUL) == args[i * 2 + 1])
+			}
+			hp = (unsigned long *)
+				(kvm->arch.hpt_virt + (pte_index << 4));
+			/* to avoid deadlock, don't spin except for first */
+			if (!try_lock_hpte(hp, HPTE_V_HVLOCK)) {
+				if (n)
+					break;
+				while (!try_lock_hpte(hp, HPTE_V_HVLOCK))
+					cpu_relax();
+			}
+			found = 0;
+			if (hp[0] & (HPTE_V_ABSENT | HPTE_V_VALID)) {
+				switch (flags & 3) {
+				case 0:		/* absolute */
 					found = 1;
-				break;
+					break;
+				case 1:		/* andcond */
+					if (!(hp[0] & args[j + 1]))
+						found = 1;
+					break;
+				case 2:		/* AVPN */
+					if ((hp[0] & ~0x7fUL) == args[j + 1])
+						found = 1;
+					break;
+				}
+			}
+			if (!found) {
+				hp[0] &= ~HPTE_V_HVLOCK;
+				args[j] = ((0x90 | flags) << 56) + pte_index;
+				continue;
 			}
+
+			args[j] = ((0x80 | flags) << 56) + pte_index;
+			rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+			/* insert R and C bits from guest PTE */
+			rcbits = rev->guest_rpte & (HPTE_R_R|HPTE_R_C);
+			args[j] |= rcbits << (56 - 5);
+
+			if (!(hp[0] & HPTE_V_VALID))
+				continue;
+
+			hp[0] &= ~HPTE_V_VALID;		/* leave it locked */
+			tlbrb[n] = compute_tlbie_rb(hp[0], hp[1], pte_index);
+			indexes[n] = j;
+			hptes[n] = hp;
+			revs[n] = rev;
+			++n;
 		}
-		if (!found) {
-			hp[0] &= ~HPTE_V_HVLOCK;
-			args[i * 2] = ((0x90 | flags) << 56) + pte_index;
-			continue;
+
+		if (!n)
+			break;
+
+		/* Now that we've collected a batch, do the tlbies */
+		if (!local) {
+			while(!try_lock_tlbie(&kvm->arch.tlbie_lock))
+				cpu_relax();
+			asm volatile("ptesync" : : : "memory");
+			for (k = 0; k < n; ++k)
+				asm volatile(PPC_TLBIE(%1,%0) : :
+					     "r" (tlbrb[k]),
+					     "r" (kvm->arch.lpid));
+			asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+			kvm->arch.tlbie_lock = 0;
+		} else {
+			asm volatile("ptesync" : : : "memory");
+			for (k = 0; k < n; ++k)
+				asm volatile("tlbiel %0" : : "r" (tlbrb[k]));
+			asm volatile("ptesync" : : : "memory");
 		}
-		/* insert R and C bits from PTE */
-		flags |= (hp[1] >> 5) & 0x0c;
-		args[i * 2] = ((0x80 | flags) << 56) + pte_index;
-		if (hp[0] & HPTE_V_VALID) {
-			tlbrb[n_inval++] = compute_tlbie_rb(hp[0], hp[1], pte_index);
+
+		for (k = 0; k < n; ++k) {
+			j = indexes[k];
+			pte_index = args[j] & ((1ul << 56) - 1);
+			hp = hptes[k];
+			rev = revs[k];
 			remove_revmap_chain(kvm, pte_index, hp[0]);
+			unlock_hpte(hp, 0);
 		}
-		smp_wmb();
-		hp[0] = 0;
-	}
-	if (n_inval == 0)
-		return ret;
-
-	if (!local) {
-		while(!try_lock_tlbie(&kvm->arch.tlbie_lock))
-			cpu_relax();
-		asm volatile("ptesync" : : : "memory");
-		for (i = 0; i < n_inval; ++i)
-			asm volatile(PPC_TLBIE(%1,%0)
-				     : : "r" (tlbrb[i]), "r" (kvm->arch.lpid));
-		asm volatile("eieio; tlbsync; ptesync" : : : "memory");
-		kvm->arch.tlbie_lock = 0;
-	} else {
-		asm volatile("ptesync" : : : "memory");
-		for (i = 0; i < n_inval; ++i)
-			asm volatile("tlbiel %0" : : "r" (tlbrb[i]));
-		asm volatile("ptesync" : : : "memory");
 	}
+
 	return ret;
 }
 
@@ -720,9 +760,7 @@ long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 	rev = real_vmalloc_addr(&kvm->arch.revmap[index]);
 	gr = rev->guest_rpte;
 
-	/* Unlock the HPTE */
-	asm volatile("lwsync" : : : "memory");
-	hpte[0] = v;
+	unlock_hpte(hpte, v);
 
 	/* For not found, if the HPTE is valid by now, retry the instruction */
 	if ((status & DSISR_NOHPTE) && (v & HPTE_V_VALID))
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/5] KVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits
  2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
  2011-12-15 12:01 ` [PATCH 1/5] KVM: PPC: Book3S HV: Keep HPTE locked when invalidating Paul Mackerras
@ 2011-12-15 12:02 ` Paul Mackerras
  2011-12-15 12:02 ` [PATCH 3/5] KVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva Paul Mackerras
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Paul Mackerras @ 2011-12-15 12:02 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, kvm-ppc

This allows both the guest and the host to use the referenced (R) and
changed (C) bits in the guest hashed page table.  The guest has a view
of R and C that is maintained in the guest_rpte field of the revmap
entry for the HPTE, and the host has a view that is maintained in the
rmap entry for the associated gfn.

Both views are updated from the guest HPT.  If a bit (R or C) is zero
in either view, it will be initially set to zero in the HPTE (or HPTEs),
until set to 1 by hardware.  When an HPTE is removed for any reason,
the R and C bits from the HPTE are ORed into both views.  We have to
be careful to read the R and C bits from the HPTE after invalidating
it, but before unlocking it, in case of any late updates by the hardware.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/include/asm/kvm_host.h |    5 ++-
 arch/powerpc/kvm/book3s_64_mmu_hv.c |   48 +++++++++++++++++++++-------------
 arch/powerpc/kvm/book3s_hv_rm_mmu.c |   45 +++++++++++++++++++--------------
 3 files changed, 59 insertions(+), 39 deletions(-)
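
The two update rules condense to a few lines (the same logic as in the
diff below, gathered here for clarity):

	/* On HPTE insertion: set R/C in the real HPTE only if set in
	 * both the host view (*rmap) and the guest view (guest_rpte): */
	rcbits = *rmap >> KVMPPC_RMAP_RC_SHIFT;
	ptel &= rcbits | ~(HPTE_R_R | HPTE_R_C);

	/* On HPTE removal: read the final hardware R/C after the tlbie,
	 * while the HPTE is still locked, and OR it into both views: */
	rcbits = hpte_r & (HPTE_R_R | HPTE_R_C);
	rev->guest_rpte |= rcbits;
	*rmap |= rcbits << KVMPPC_RMAP_RC_SHIFT;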

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 968f3aa..1cb6e52 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -200,8 +200,9 @@ struct revmap_entry {
  * index in the guest HPT of a HPTE that points to the page.
  */
 #define KVMPPC_RMAP_LOCK_BIT	63
-#define KVMPPC_RMAP_REF_BIT	33
-#define KVMPPC_RMAP_REFERENCED	(1ul << KVMPPC_RMAP_REF_BIT)
+#define KVMPPC_RMAP_RC_SHIFT	32
+#define KVMPPC_RMAP_REFERENCED	(HPTE_R_R << KVMPPC_RMAP_RC_SHIFT)
+#define KVMPPC_RMAP_CHANGED	(HPTE_R_C << KVMPPC_RMAP_RC_SHIFT)
 #define KVMPPC_RMAP_PRESENT	0x100000000ul
 #define KVMPPC_RMAP_INDEX	0xfffffffful
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 66d6452..aa51dde 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -505,6 +505,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	unsigned long is_io;
 	unsigned int writing, write_ok;
 	struct vm_area_struct *vma;
+	unsigned long rcbits;
 
 	/*
 	 * Real-mode code has already searched the HPT and found the
@@ -640,11 +641,17 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		goto out_unlock;
 	}
 
+	/* Only set R/C in real HPTE if set in both *rmap and guest_rpte */
+	rcbits = *rmap >> KVMPPC_RMAP_RC_SHIFT;
+	r &= rcbits | ~(HPTE_R_R | HPTE_R_C);
+
 	if (hptep[0] & HPTE_V_VALID) {
 		/* HPTE was previously valid, so we need to invalidate it */
 		unlock_rmap(rmap);
 		hptep[0] |= HPTE_V_ABSENT;
 		kvmppc_invalidate_hpte(kvm, hptep, index);
+		/* don't lose previous R and C bits */
+		r |= hptep[1] & (HPTE_R_R | HPTE_R_C);
 	} else {
 		kvmppc_add_revmap_chain(kvm, rev, rmap, index, 0);
 	}
@@ -701,50 +708,55 @@ static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp,
 	struct revmap_entry *rev = kvm->arch.revmap;
 	unsigned long h, i, j;
 	unsigned long *hptep;
-	unsigned long ptel, psize;
+	unsigned long ptel, psize, rcbits;
 
 	for (;;) {
-		while (test_and_set_bit_lock(KVMPPC_RMAP_LOCK_BIT, rmapp))
-			cpu_relax();
+		lock_rmap(rmapp);
 		if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
-			__clear_bit_unlock(KVMPPC_RMAP_LOCK_BIT, rmapp);
+			unlock_rmap(rmapp);
 			break;
 		}
 
 		/*
 		 * To avoid an ABBA deadlock with the HPTE lock bit,
-		 * we have to unlock the rmap chain before locking the HPTE.
-		 * Thus we remove the first entry, unlock the rmap chain,
-		 * lock the HPTE and then check that it is for the
-		 * page we're unmapping before changing it to non-present.
+		 * we can't spin on the HPTE lock while holding the
+		 * rmap chain lock.
 		 */
 		i = *rmapp & KVMPPC_RMAP_INDEX;
+		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
+			/* unlock rmap before spinning on the HPTE lock */
+			unlock_rmap(rmapp);
+			while (hptep[0] & HPTE_V_HVLOCK)
+				cpu_relax();
+			continue;
+		}
 		j = rev[i].forw;
 		if (j == i) {
 			/* chain is now empty */
-			j = 0;
+			*rmapp &= ~(KVMPPC_RMAP_PRESENT | KVMPPC_RMAP_INDEX);
 		} else {
 			/* remove i from chain */
 			h = rev[i].back;
 			rev[h].forw = j;
 			rev[j].back = h;
 			rev[i].forw = rev[i].back = i;
-			j |= KVMPPC_RMAP_PRESENT;
+			*rmapp = (*rmapp & ~KVMPPC_RMAP_INDEX) | j;
 		}
-		smp_wmb();
-		*rmapp = j | (1ul << KVMPPC_RMAP_REF_BIT);
 
-		/* Now lock, check and modify the HPTE */
-		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
-		while (!try_lock_hpte(hptep, HPTE_V_HVLOCK))
-			cpu_relax();
+		/* Now check and modify the HPTE */
 		ptel = rev[i].guest_rpte;
 		psize = hpte_page_size(hptep[0], ptel);
 		if ((hptep[0] & HPTE_V_VALID) &&
 		    hpte_rpn(ptel, psize) == gfn) {
-			kvmppc_invalidate_hpte(kvm, hptep, i);
 			hptep[0] |= HPTE_V_ABSENT;
+			kvmppc_invalidate_hpte(kvm, hptep, i);
+			/* Harvest R and C */
+			rcbits = hptep[1] & (HPTE_R_R | HPTE_R_C);
+			*rmapp |= rcbits << KVMPPC_RMAP_RC_SHIFT;
+			rev[i].guest_rpte = ptel | rcbits;
 		}
+		unlock_rmap(rmapp);
 		hptep[0] &= ~HPTE_V_HVLOCK;
 	}
 	return 0;
@@ -767,7 +779,7 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 	kvm_unmap_rmapp(kvm, rmapp, gfn);
 	while (test_and_set_bit_lock(KVMPPC_RMAP_LOCK_BIT, rmapp))
 		cpu_relax();
-	__clear_bit(KVMPPC_RMAP_REF_BIT, rmapp);
+	*rmapp &= ~KVMPPC_RMAP_REFERENCED;
 	__clear_bit_unlock(KVMPPC_RMAP_LOCK_BIT, rmapp);
 	return 1;
 }
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 823348d..bcf6f92 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -87,15 +87,17 @@ EXPORT_SYMBOL_GPL(kvmppc_add_revmap_chain);
 
 /* Remove this HPTE from the chain for a real page */
 static void remove_revmap_chain(struct kvm *kvm, long pte_index,
-				unsigned long hpte_v)
+				struct revmap_entry *rev,
+				unsigned long hpte_v, unsigned long hpte_r)
 {
-	struct revmap_entry *rev, *next, *prev;
+	struct revmap_entry *next, *prev;
 	unsigned long gfn, ptel, head;
 	struct kvm_memory_slot *memslot;
 	unsigned long *rmap;
+	unsigned long rcbits;
 
-	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
-	ptel = rev->guest_rpte;
+	rcbits = hpte_r & (HPTE_R_R | HPTE_R_C);
+	ptel = rev->guest_rpte |= rcbits;
 	gfn = hpte_rpn(ptel, hpte_page_size(hpte_v, ptel));
 	memslot = builtin_gfn_to_memslot(kvm, gfn);
 	if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
@@ -116,6 +118,7 @@ static void remove_revmap_chain(struct kvm *kvm, long pte_index,
 		else
 			*rmap = (*rmap & ~KVMPPC_RMAP_INDEX) | head;
 	}
+	*rmap |= rcbits << KVMPPC_RMAP_RC_SHIFT;
 	unlock_rmap(rmap);
 }
 
@@ -162,6 +165,7 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 	pte_t pte;
 	unsigned int writing;
 	unsigned long mmu_seq;
+	unsigned long rcbits;
 	bool realmode = vcpu->arch.vcore->vcore_state == VCORE_RUNNING;
 
 	psize = hpte_page_size(pteh, ptel);
@@ -320,6 +324,9 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 		} else {
 			kvmppc_add_revmap_chain(kvm, rev, rmap, pte_index,
 						realmode);
+			/* Only set R/C in real HPTE if already set in *rmap */
+			rcbits = *rmap >> KVMPPC_RMAP_RC_SHIFT;
+			ptel &= rcbits | ~(HPTE_R_R | HPTE_R_C);
 		}
 	}
 
@@ -394,7 +401,8 @@ long kvmppc_h_remove(struct kvm_vcpu *vcpu, unsigned long flags,
 			asm volatile("tlbiel %0" : : "r" (rb));
 			asm volatile("ptesync" : : : "memory");
 		}
-		remove_revmap_chain(kvm, pte_index, v);
+		/* Read PTE low word after tlbie to get final R/C values */
+		remove_revmap_chain(kvm, pte_index, rev, v, hpte[1]);
 	}
 	r = rev->guest_rpte;
 	unlock_hpte(hpte, 0);
@@ -469,12 +477,13 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 
 			args[j] = ((0x80 | flags) << 56) + pte_index;
 			rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
-			/* insert R and C bits from guest PTE */
-			rcbits = rev->guest_rpte & (HPTE_R_R|HPTE_R_C);
-			args[j] |= rcbits << (56 - 5);
 
-			if (!(hp[0] & HPTE_V_VALID))
+			if (!(hp[0] & HPTE_V_VALID)) {
+				/* insert R and C bits from PTE */
+				rcbits = rev->guest_rpte & (HPTE_R_R|HPTE_R_C);
+				args[j] |= rcbits << (56 - 5);
 				continue;
+			}
 
 			hp[0] &= ~HPTE_V_VALID;		/* leave it locked */
 			tlbrb[n] = compute_tlbie_rb(hp[0], hp[1], pte_index);
@@ -505,13 +514,16 @@ long kvmppc_h_bulk_remove(struct kvm_vcpu *vcpu)
 			asm volatile("ptesync" : : : "memory");
 		}
 
+		/* Read PTE low words after tlbie to get final R/C values */
 		for (k = 0; k < n; ++k) {
 			j = indexes[k];
 			pte_index = args[j] & ((1ul << 56) - 1);
 			hp = hptes[k];
 			rev = revs[k];
-			remove_revmap_chain(kvm, pte_index, hp[0]);
-			unlock_hpte(hp, 0);
+			remove_revmap_chain(kvm, pte_index, rev, hp[0], hp[1]);
+			rcbits = rev->guest_rpte & (HPTE_R_R|HPTE_R_C);
+			args[j] |= rcbits << (56 - 5);
+			hp[0] = 0;
 		}
 	}
 
@@ -595,8 +607,7 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 		pte_index &= ~3;
 		n = 4;
 	}
-	if (flags & H_R_XLATE)
-		rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
 	for (i = 0; i < n; ++i, ++pte_index) {
 		hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
 		v = hpte[0] & ~HPTE_V_HVLOCK;
@@ -605,12 +616,8 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 			v &= ~HPTE_V_ABSENT;
 			v |= HPTE_V_VALID;
 		}
-		if (v & HPTE_V_VALID) {
-			if (rev)
-				r = rev[i].guest_rpte;
-			else
-				r = hpte[1] | HPTE_R_RPN;
-		}
+		if (v & HPTE_V_VALID)
+			r = rev[i].guest_rpte | (r & (HPTE_R_R | HPTE_R_C));
 		vcpu->arch.gpr[4 + i * 2] = v;
 		vcpu->arch.gpr[5 + i * 2] = r;
 	}
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 3/5] KVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva
  2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
  2011-12-15 12:01 ` [PATCH 1/5] KVM: PPC: Book3S HV: Keep HPTE locked when invalidating Paul Mackerras
  2011-12-15 12:02 ` [PATCH 2/5] KVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits Paul Mackerras
@ 2011-12-15 12:02 ` Paul Mackerras
  2011-12-15 12:03 ` [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit Paul Mackerras
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Paul Mackerras @ 2011-12-15 12:02 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, kvm-ppc

This uses the host view of the hardware R (referenced) bit to speed
up kvm_age_hva() and kvm_test_age_hva().  Instead of removing all
the relevant HPTEs in kvm_age_hva(), we now just reset their R bits
if set.  Also, kvm_test_age_hva() now scans the relevant HPTEs to
see if any of them have R set.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |    2 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |   81 ++++++++++++++++++++++++++++-----
 arch/powerpc/kvm/book3s_hv_rm_mmu.c   |   19 ++++++++
 3 files changed, 91 insertions(+), 11 deletions(-)
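
The heart of the change is a test-and-clear of the hardware R bit in
place of a full unmap; schematically (simplified from the diff below),
for each HPTE on the rmap chain of the page:

	if ((hptep[0] & HPTE_V_VALID) && (hptep[1] & HPTE_R_R)) {
		kvmppc_clear_ref_hpte(kvm, hptep, i);	/* clear R + tlbie */
		rev[i].guest_rpte |= HPTE_R_R;	/* keep the guest's view */
		ret = 1;			/* page was referenced */
	}

The mapping stays intact, so the guest takes no extra faults afterwards.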

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ea9539c..6ececb4 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -149,6 +149,8 @@ extern void kvmppc_add_revmap_chain(struct kvm *kvm, struct revmap_entry *rev,
 			unsigned long *rmap, long pte_index, int realmode);
 extern void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep,
 			unsigned long pte_index);
+void kvmppc_clear_ref_hpte(struct kvm *kvm, unsigned long *hptep,
+			unsigned long pte_index);
 extern void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long addr,
 			unsigned long *nb_ret);
 extern void kvmppc_unpin_guest_page(struct kvm *kvm, void *addr);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index aa51dde..926e2b9 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -772,16 +772,50 @@ int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
 static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			 unsigned long gfn)
 {
-	if (!kvm->arch.using_mmu_notifiers)
-		return 0;
-	if (!(*rmapp & KVMPPC_RMAP_REFERENCED))
-		return 0;
-	kvm_unmap_rmapp(kvm, rmapp, gfn);
-	while (test_and_set_bit_lock(KVMPPC_RMAP_LOCK_BIT, rmapp))
-		cpu_relax();
-	*rmapp &= ~KVMPPC_RMAP_REFERENCED;
-	__clear_bit_unlock(KVMPPC_RMAP_LOCK_BIT, rmapp);
-	return 1;
+	struct revmap_entry *rev = kvm->arch.revmap;
+	unsigned long head, i, j;
+	unsigned long *hptep;
+	int ret = 0;
+
+ retry:
+	lock_rmap(rmapp);
+	if (*rmapp & KVMPPC_RMAP_REFERENCED) {
+		*rmapp &= ~KVMPPC_RMAP_REFERENCED;
+		ret = 1;
+	}
+	if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
+		unlock_rmap(rmapp);
+		return ret;
+	}
+
+	i = head = *rmapp & KVMPPC_RMAP_INDEX;
+	do {
+		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		j = rev[i].forw;
+
+		/* If this HPTE isn't referenced, ignore it */
+		if (!(hptep[1] & HPTE_R_R))
+			continue;
+
+		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
+			/* unlock rmap before spinning on the HPTE lock */
+			unlock_rmap(rmapp);
+			while (hptep[0] & HPTE_V_HVLOCK)
+				cpu_relax();
+			goto retry;
+		}
+
+		/* Now check and modify the HPTE */
+		if ((hptep[0] & HPTE_V_VALID) && (hptep[1] & HPTE_R_R)) {
+			kvmppc_clear_ref_hpte(kvm, hptep, i);
+			rev[i].guest_rpte |= HPTE_R_R;
+			ret = 1;
+		}
+		hptep[0] &= ~HPTE_V_HVLOCK;
+	} while ((i = j) != head);
+
+	unlock_rmap(rmapp);
+	return ret;
 }
 
 int kvm_age_hva(struct kvm *kvm, unsigned long hva)
@@ -794,7 +828,32 @@ int kvm_age_hva(struct kvm *kvm, unsigned long hva)
 static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
 			      unsigned long gfn)
 {
-	return !!(*rmapp & KVMPPC_RMAP_REFERENCED);
+	struct revmap_entry *rev = kvm->arch.revmap;
+	unsigned long head, i, j;
+	unsigned long *hp;
+	int ret = 1;
+
+	if (*rmapp & KVMPPC_RMAP_REFERENCED)
+		return 1;
+
+	lock_rmap(rmapp);
+	if (*rmapp & KVMPPC_RMAP_REFERENCED)
+		goto out;
+
+	if (*rmapp & KVMPPC_RMAP_PRESENT) {
+		i = head = *rmapp & KVMPPC_RMAP_INDEX;
+		do {
+			hp = (unsigned long *)(kvm->arch.hpt_virt + (i << 4));
+			j = rev[i].forw;
+			if (hp[1] & HPTE_R_R)
+				goto out;
+		} while ((i = j) != head);
+	}
+	ret = 0;
+
+ out:
+	unlock_rmap(rmapp);
+	return ret;
 }
 
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index bcf6f92..76864a8 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -641,6 +641,25 @@ void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep,
 }
 EXPORT_SYMBOL_GPL(kvmppc_invalidate_hpte);
 
+void kvmppc_clear_ref_hpte(struct kvm *kvm, unsigned long *hptep,
+			   unsigned long pte_index)
+{
+	unsigned long rb;
+	unsigned char rbyte;
+
+	rb = compute_tlbie_rb(hptep[0], hptep[1], pte_index);
+	rbyte = (hptep[1] & ~HPTE_R_R) >> 8;
+	/* modify only the second-last byte, which contains the ref bit */
+	*((char *)hptep + 14) = rbyte;
+	while (!try_lock_tlbie(&kvm->arch.tlbie_lock))
+		cpu_relax();
+	asm volatile(PPC_TLBIE(%1,%0)"; eieio; tlbsync"
+		     : : "r" (rb), "r" (kvm->arch.lpid));
+	asm volatile("ptesync" : : : "memory");
+	kvm->arch.tlbie_lock = 0;
+}
+EXPORT_SYMBOL_GPL(kvmppc_clear_ref_hpte);
+
 static int slb_base_page_shift[4] = {
 	24,	/* 16M */
 	16,	/* 64k */
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
  2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
                   ` (2 preceding siblings ...)
  2011-12-15 12:02 ` [PATCH 3/5] KVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva Paul Mackerras
@ 2011-12-15 12:03 ` Paul Mackerras
  2011-12-23 13:23   ` Alexander Graf
  2011-12-15 12:04 ` [PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls Paul Mackerras
  2011-12-23 13:36 ` [PATCH 0/5] Make use of hardware reference and change bits in HPT Alexander Graf
  5 siblings, 1 reply; 12+ messages in thread
From: Paul Mackerras @ 2011-12-15 12:03 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, kvm-ppc

This changes the implementation of kvm_vm_ioctl_get_dirty_log() for
Book3s HV guests to use the hardware C (changed) bits in the guest
hashed page table.  Since this makes the implementation quite different
from the Book3s PR case, this moves the existing implementation from
book3s.c to book3s_pr.c and creates a new implementation in book3s_hv.c.
That implementation calls kvmppc_hv_get_dirty_log() to do the actual
work by calling kvm_test_clear_dirty on each page.  It iterates over
the HPTEs, clearing the C bit if set, and returns 1 if any C bit was
set (including the saved C bit in the rmap entry).

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/include/asm/kvm_book3s.h |    2 +
 arch/powerpc/kvm/book3s.c             |   39 ------------------
 arch/powerpc/kvm/book3s_64_mmu_hv.c   |   69 +++++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv.c          |   37 +++++++++++++++++
 arch/powerpc/kvm/book3s_pr.c          |   39 ++++++++++++++++++
 5 files changed, 147 insertions(+), 39 deletions(-)
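
On the user-space side this is consumed through the standard
KVM_GET_DIRTY_LOG ioctl; a minimal sketch (generic KVM API, nothing
specific to this patch; vm_fd, slot and bitmap are assumed to be set up
by the caller, with bitmap at least memslot->npages bits long):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <stdio.h>

	struct kvm_dirty_log log = {
		.slot = slot,
		.dirty_bitmap = bitmap,
	};

	if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
		perror("KVM_GET_DIRTY_LOG");
	/* bit i of bitmap is now set iff page i of the slot was
	 * dirtied since the previous call; the kernel's log is cleared */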

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 6ececb4..aa795cc 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -158,6 +158,8 @@ extern long kvmppc_virtmode_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 			long pte_index, unsigned long pteh, unsigned long ptel);
 extern long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 			long pte_index, unsigned long pteh, unsigned long ptel);
+extern long kvmppc_hv_get_dirty_log(struct kvm *kvm,
+			struct kvm_memory_slot *memslot);
 
 extern void kvmppc_entry_trampoline(void);
 extern void kvmppc_hv_entry_trampoline(void);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 6bf7e05..7d54f4e 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -477,45 +477,6 @@ int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-/*
- * Get (and clear) the dirty memory log for a memory slot.
- */
-int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
-				      struct kvm_dirty_log *log)
-{
-	struct kvm_memory_slot *memslot;
-	struct kvm_vcpu *vcpu;
-	ulong ga, ga_end;
-	int is_dirty = 0;
-	int r;
-	unsigned long n;
-
-	mutex_lock(&kvm->slots_lock);
-
-	r = kvm_get_dirty_log(kvm, log, &is_dirty);
-	if (r)
-		goto out;
-
-	/* If nothing is dirty, don't bother messing with page tables. */
-	if (is_dirty) {
-		memslot = id_to_memslot(kvm->memslots, log->slot);
-
-		ga = memslot->base_gfn << PAGE_SHIFT;
-		ga_end = ga + (memslot->npages << PAGE_SHIFT);
-
-		kvm_for_each_vcpu(n, vcpu, kvm)
-			kvmppc_mmu_pte_pflush(vcpu, ga, ga_end);
-
-		n = kvm_dirty_bitmap_bytes(memslot);
-		memset(memslot->dirty_bitmap, 0, n);
-	}
-
-	r = 0;
-out:
-	mutex_unlock(&kvm->slots_lock);
-	return r;
-}
-
 void kvmppc_decrementer_func(unsigned long data)
 {
 	struct kvm_vcpu *vcpu = (struct kvm_vcpu *)data;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 926e2b9..783cd35 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -870,6 +870,75 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 	kvm_handle_hva(kvm, hva, kvm_unmap_rmapp);
 }
 
+static int kvm_test_clear_dirty(struct kvm *kvm, unsigned long *rmapp)
+{
+	struct revmap_entry *rev = kvm->arch.revmap;
+	unsigned long head, i, j;
+	unsigned long *hptep;
+	int ret = 0;
+
+ retry:
+	lock_rmap(rmapp);
+	if (*rmapp & KVMPPC_RMAP_CHANGED) {
+		*rmapp &= ~KVMPPC_RMAP_CHANGED;
+		ret = 1;
+	}
+	if (!(*rmapp & KVMPPC_RMAP_PRESENT)) {
+		unlock_rmap(rmapp);
+		return ret;
+	}
+
+	i = head = *rmapp & KVMPPC_RMAP_INDEX;
+	do {
+		hptep = (unsigned long *) (kvm->arch.hpt_virt + (i << 4));
+		j = rev[i].forw;
+
+		if (!(hptep[1] & HPTE_R_C))
+			continue;
+
+		if (!try_lock_hpte(hptep, HPTE_V_HVLOCK)) {
+			/* unlock rmap before spinning on the HPTE lock */
+			unlock_rmap(rmapp);
+			while (hptep[0] & HPTE_V_HVLOCK)
+				cpu_relax();
+			goto retry;
+		}
+
+		/* Now check and modify the HPTE */
+		if ((hptep[0] & HPTE_V_VALID) && (hptep[1] & HPTE_R_C)) {
+			/* need to make it temporarily absent to clear C */
+			hptep[0] |= HPTE_V_ABSENT;
+			kvmppc_invalidate_hpte(kvm, hptep, i);
+			hptep[1] &= ~HPTE_R_C;
+			eieio();
+			hptep[0] = (hptep[0] & ~HPTE_V_ABSENT) | HPTE_V_VALID;
+			rev[i].guest_rpte |= HPTE_R_C;
+			ret = 1;
+		}
+		hptep[0] &= ~HPTE_V_HVLOCK;
+	} while ((i = j) != head);
+
+	unlock_rmap(rmapp);
+	return ret;
+}
+
+long kvmppc_hv_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	unsigned long i;
+	unsigned long *rmapp, *map;
+
+	preempt_disable();
+	rmapp = memslot->rmap;
+	map = memslot->dirty_bitmap;
+	for (i = 0; i < memslot->npages; ++i) {
+		if (kvm_test_clear_dirty(kvm, rmapp))
+			__set_bit_le(i, map);
+		++rmapp;
+	}
+	preempt_enable();
+	return 0;
+}
+
 void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long gpa,
 			    unsigned long *nb_ret)
 {
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index cb8e15f..c11d960 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1105,6 +1105,43 @@ long kvm_vm_ioctl_allocate_rma(struct kvm *kvm, struct kvm_allocate_rma *ret)
 	return fd;
 }
 
+/*
+ * Get (and clear) the dirty memory log for a memory slot.
+ */
+int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
+{
+	struct kvm_memory_slot *memslot;
+	int r;
+	unsigned long n;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = -EINVAL;
+	if (log->slot >= KVM_MEMORY_SLOTS)
+		goto out;
+
+	memslot = id_to_memslot(kvm->memslots, log->slot);
+	r = -ENOENT;
+	if (!memslot->dirty_bitmap)
+		goto out;
+
+	n = kvm_dirty_bitmap_bytes(memslot);
+	memset(memslot->dirty_bitmap, 0, n);
+
+	r = kvmppc_hv_get_dirty_log(kvm, memslot);
+	if (r)
+		goto out;
+
+	r = -EFAULT;
+	if (copy_to_user(log->dirty_bitmap, memslot->dirty_bitmap, n))
+		goto out;
+
+	r = 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
+
 static unsigned long slb_pgsize_encoding(unsigned long psize)
 {
 	unsigned long senc = 0;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index ddd92a5..dfb52dc 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1069,6 +1069,45 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+/*
+ * Get (and clear) the dirty memory log for a memory slot.
+ */
+int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
+				      struct kvm_dirty_log *log)
+{
+	struct kvm_memory_slot *memslot;
+	struct kvm_vcpu *vcpu;
+	ulong ga, ga_end;
+	int is_dirty = 0;
+	int r;
+	unsigned long n;
+
+	mutex_lock(&kvm->slots_lock);
+
+	r = kvm_get_dirty_log(kvm, log, &is_dirty);
+	if (r)
+		goto out;
+
+	/* If nothing is dirty, don't bother messing with page tables. */
+	if (is_dirty) {
+		memslot = id_to_memslot(kvm->memslots, log->slot);
+
+		ga = memslot->base_gfn << PAGE_SHIFT;
+		ga_end = ga + (memslot->npages << PAGE_SHIFT);
+
+		kvm_for_each_vcpu(n, vcpu, kvm)
+			kvmppc_mmu_pte_pflush(vcpu, ga, ga_end);
+
+		n = kvm_dirty_bitmap_bytes(memslot);
+		memset(memslot->dirty_bitmap, 0, n);
+	}
+
+	r = 0;
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return r;
+}
+
 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 				      struct kvm_userspace_memory_region *mem)
 {
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls
  2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
                   ` (3 preceding siblings ...)
  2011-12-15 12:03 ` [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit Paul Mackerras
@ 2011-12-15 12:04 ` Paul Mackerras
  2011-12-23 13:26   ` Alexander Graf
  2011-12-23 13:36 ` [PATCH 0/5] Make use of hardware reference and change bits in HPT Alexander Graf
  5 siblings, 1 reply; 12+ messages in thread
From: Paul Mackerras @ 2011-12-15 12:04 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, kvm-ppc

This adds implementations for the H_CLEAR_REF (test and clear reference
bit) and H_CLEAR_MOD (test and clear changed bit) hypercalls.  These
hypercalls are not used by Linux guests at this stage, and these
implementations are only compile tested.

Signed-off-by: Paul Mackerras <paulus@samba.org>
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c     |   69 +++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S |    4 +-
 2 files changed, 71 insertions(+), 2 deletions(-)
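
For illustration, a guest could invoke these roughly as follows (a
hypothetical guest-side sketch using the pseries hcall wrapper; not
part of this patch, and Linux guests do not currently do this):

	#include <asm/hvcall.h>

	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
	long rc;

	/* Test and clear the reference bit for one HPTE; the old R/C
	 * value of the entry comes back in retbuf[0] (i.e. in R4). */
	rc = plpar_hcall(H_CLEAR_REF, retbuf, 0 /* flags */, pte_index);
	if (rc == H_SUCCESS && (retbuf[0] & HPTE_R_R)) {
		/* the page was referenced since the last clear */
	}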

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 76864a8..718b5a7 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -624,6 +624,75 @@ long kvmppc_h_read(struct kvm_vcpu *vcpu, unsigned long flags,
 	return H_SUCCESS;
 }
 
+long kvmppc_h_clear_ref(struct kvm_vcpu *vcpu, unsigned long flags,
+			unsigned long pte_index)
+{
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long *hpte, v, r, gr;
+	struct revmap_entry *rev;
+
+	if (pte_index >= HPT_NPTE)
+		return H_PARAMETER;
+
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+	while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
+		cpu_relax();
+	v = hpte[0];
+	r = hpte[1];
+	gr = rev->guest_rpte;
+	rev->guest_rpte &= ~HPTE_R_R;
+	if (v & HPTE_V_VALID) {
+		gr |= r & (HPTE_R_R | HPTE_R_C);
+		if (r & HPTE_R_R)
+			kvmppc_clear_ref_hpte(kvm, hpte, pte_index);
+	}
+	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
+
+	if (!(v & (HPTE_V_VALID | HPTE_V_ABSENT)))
+		return H_NOT_FOUND;
+
+	vcpu->arch.gpr[4] = gr;
+	return H_SUCCESS;
+}
+
+long kvmppc_h_clear_mod(struct kvm_vcpu *vcpu, unsigned long flags,
+			unsigned long pte_index)
+{
+	struct kvm *kvm = vcpu->kvm;
+	unsigned long *hpte, v, r, gr;
+	struct revmap_entry *rev;
+
+	if (pte_index >= HPT_NPTE)
+		return H_PARAMETER;
+
+	rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
+	hpte = (unsigned long *)(kvm->arch.hpt_virt + (pte_index << 4));
+	while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
+		cpu_relax();
+	v = hpte[0];
+	r = hpte[1];
+	gr = rev->guest_rpte;
+	rev->guest_rpte &= ~HPTE_R_C;
+	if (v & HPTE_V_VALID) {
+		gr |= r & (HPTE_R_R | HPTE_R_C);
+		if (r & HPTE_R_C) {
+			hpte[0] |= HPTE_V_ABSENT;
+			kvmppc_invalidate_hpte(kvm, hpte, pte_index);
+			hpte[1] &= ~HPTE_R_C;
+			eieio();
+			hpte[0] = v;
+		}
+	}
+	unlock_hpte(hpte, v & ~HPTE_V_HVLOCK);
+
+	if (!(v & (HPTE_V_VALID | HPTE_V_ABSENT)))
+		return H_NOT_FOUND;
+
+	vcpu->arch.gpr[4] = gr;
+	return H_SUCCESS;
+}
+
 void kvmppc_invalidate_hpte(struct kvm *kvm, unsigned long *hptep,
 			unsigned long pte_index)
 {
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index b70bf22..4c52d6d 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -1276,8 +1276,8 @@ hcall_real_table:
 	.long	.kvmppc_h_remove - hcall_real_table
 	.long	.kvmppc_h_enter - hcall_real_table
 	.long	.kvmppc_h_read - hcall_real_table
-	.long	0		/* 0x10 - H_CLEAR_MOD */
-	.long	0		/* 0x14 - H_CLEAR_REF */
+	.long	.kvmppc_h_clear_mod - hcall_real_table
+	.long	.kvmppc_h_clear_ref - hcall_real_table
 	.long	.kvmppc_h_protect - hcall_real_table
 	.long	0		/* 0x1c - H_GET_TCE */
 	.long	.kvmppc_h_put_tce - hcall_real_table
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
  2011-12-15 12:03 ` [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit Paul Mackerras
@ 2011-12-23 13:23   ` Alexander Graf
  2011-12-25 23:35     ` Paul Mackerras
  0 siblings, 1 reply; 12+ messages in thread
From: Alexander Graf @ 2011-12-23 13:23 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, KVM list, kvm-ppc


On 15.12.2011, at 13:03, Paul Mackerras wrote:

> This changes the implementation of kvm_vm_ioctl_get_dirty_log() for
> Book3s HV guests to use the hardware C (changed) bits in the guest
> hashed page table.  Since this makes the implementation quite different
> from the Book3s PR case, this moves the existing implementation from
> book3s.c to book3s_pr.c and creates a new implementation in book3s_hv.c.
> That implementation calls kvmppc_hv_get_dirty_log() to do the actual
> work by calling kvm_test_clear_dirty on each page.  It iterates over
> the HPTEs, clearing the C bit if set, and returns 1 if any C bit was
> set (including the saved C bit in the rmap entry).
>
> Signed-off-by: Paul Mackerras <paulus@samba.org>

[...]

> +long kvmppc_hv_get_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
> +{
> +	unsigned long i;
> +	unsigned long *rmapp, *map;
> +
> +	preempt_disable();
> +	rmapp = memslot->rmap;
> +	map = memslot->dirty_bitmap;
> +	for (i = 0; i < memslot->npages; ++i) {
> +		if (kvm_test_clear_dirty(kvm, rmapp))
> +			__set_bit_le(i, map);

So if I read things correctly, this is the only case you're setting
pages as dirty. What if you have the following:

  guest adds HTAB entry x
  guest writes to page mapped by x
  guest removes HTAB entry x
  host fetches dirty log

You can replace "removes" by "is overwritten by another mapping" if you
like.


Alex

PS: Always CC kvm@vger for stuff that others might want to review
(basically all patches)

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls
  2011-12-15 12:04 ` [PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls Paul Mackerras
@ 2011-12-23 13:26   ` Alexander Graf
  0 siblings, 0 replies; 12+ messages in thread
From: Alexander Graf @ 2011-12-23 13:26 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, KVM list, kvm-ppc


On 15.12.2011, at 13:04, Paul Mackerras wrote:

> This adds implementations for the H_CLEAR_REF (test and clear reference
> bit) and H_CLEAR_MOD (test and clear changed bit) hypercalls.  These
> hypercalls are not used by Linux guests at this stage, and these
> implementations are only compile tested.

Do we need them then? Are they mandatory in PAPR? I don't feel all that
great having unused / untested code accessible from the guest.

Alex

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] Make use of hardware reference and change bits in HPT
  2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
                   ` (4 preceding siblings ...)
  2011-12-15 12:04 ` [PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls Paul Mackerras
@ 2011-12-23 13:36 ` Alexander Graf
  5 siblings, 0 replies; 12+ messages in thread
From: Alexander Graf @ 2011-12-23 13:36 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev, kvm-ppc


On 15.12.2011, at 13:00, Paul Mackerras wrote:

> This series of patches builds on top of my previous series and
> modifies the Book3S HV memory management code to use the hardware
> reference and change bits in the guest hashed page table.  This makes
> kvm_age_hva() more efficient, lets us implement the dirty page
> tracking properly (which in turn means that things like VGA emulation
> in qemu can work), and also means that we can supply hardware
> reference and change information to the guest -- not that Linux guests
> currently use that information, but possibly they will want it in
> future, and there is an interface defined in PAPR for it.

I applied Patches 1-4 to kvm-ppc-next.


Alex

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
  2011-12-23 13:23   ` Alexander Graf
@ 2011-12-25 23:35     ` Paul Mackerras
  2011-12-26  5:05       ` Takuya Yoshikawa
  0 siblings, 1 reply; 12+ messages in thread
From: Paul Mackerras @ 2011-12-25 23:35 UTC (permalink / raw)
  To: Alexander Graf; +Cc: linuxppc-dev, KVM list, kvm-ppc

On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:

> So if I read things correctly, this is the only case you're setting
> pages as dirty. What if you have the following:
> 
>   guest adds HTAB entry x
>   guest writes to page mapped by x
>   guest removes HTAB entry x
>   host fetches dirty log

In that case the dirtiness is preserved in the setting of the
KVMPPC_RMAP_CHANGED bit in the rmap entry.  kvm_test_clear_dirty()
returns 1 if that bit is set (and clears it).  Using the rmap entry
for this is convenient because (a) we also use it for saving the
referenced bit when a HTAB entry is removed, and we can transfer both
R and C over in one operation; (b) we need to be able to save away the
C bit in real mode, and we already need to get the real-mode address
of the rmap entry -- if we wanted to save it in a dirty bitmap we'd
have to do an extra translation to get the real-mode address of the
dirty bitmap word; (c) to avoid SMP races, if we were asynchronously
setting bits in the dirty bitmap we'd have to do the double-buffering
thing that x86 does, which seems more complicated than using the rmap
entry (which we already have a lock bit for).

> PS: Always CC kvm@vger for stuff that others might want to review
> (basically all patches)

So why do we have a separate kvm-ppc list then? :)

Paul.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
  2011-12-25 23:35     ` Paul Mackerras
@ 2011-12-26  5:05       ` Takuya Yoshikawa
  2011-12-31  0:44         ` Paul Mackerras
  0 siblings, 1 reply; 12+ messages in thread
From: Takuya Yoshikawa @ 2011-12-26  5:05 UTC (permalink / raw)
  To: Paul Mackerras
  Cc: linuxppc-dev, Takuya Yoshikawa, Alexander Graf, kvm-ppc, KVM list

(2011/12/26 8:35), Paul Mackerras wrote:
> On Fri, Dec 23, 2011 at 02:23:30PM +0100, Alexander Graf wrote:
>
>> So if I read things correctly, this is the only case you're setting
>> pages as dirty. What if you have the following:
>>
>>    guest adds HTAB entry x
>>    guest writes to page mapped by x
>>    guest removes HTAB entry x
>>    host fetches dirty log
>
> In that case the dirtiness is preserved in the setting of the
> KVMPPC_RMAP_CHANGED bit in the rmap entry.  kvm_test_clear_dirty()
> returns 1 if that bit is set (and clears it).  Using the rmap entry
> for this is convenient because (a) we also use it for saving the
> referenced bit when a HTAB entry is removed, and we can transfer both
> R and C over in one operation; (b) we need to be able to save away the
> C bit in real mode, and we already need to get the real-mode address
> of the rmap entry -- if we wanted to save it in a dirty bitmap we'd
> have to do an extra translation to get the real-mode address of the
> dirty bitmap word; (c) to avoid SMP races, if we were asynchronously
> setting bits in the dirty bitmap we'd have to do the double-buffering
> thing that x86 does, which seems more complicated than using the rmap
> entry (which we already have a lock bit for).

From my x86 dirty logging experience I have a concern about your code:
it looks slow even when there are no or only a few dirty pages in the slot.

+	for (i = 0; i < memslot->npages; ++i) {
+		if (kvm_test_clear_dirty(kvm, rmapp))
+			__set_bit_le(i, map);
+		++rmapp;
+	}

The check is being done for each page and this can be very expensive because
the number of pages is not small.

	When we scan the dirty_bitmap, 64 pages are checked at once, so
	the problem is not so significant.

Though I do not know exactly what kvm-ppc's dirty logging is aiming at, I
guess reporting cleanliness to user-space without noticeable delay is important.

	E.g. for VGA, most of the time the slot is clean.  For live
	migration, the chance of seeing a completely clean slot is small,
	but almost all cases are sparse.
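
For comparison, a rough sketch of such a word-at-a-time scan
(illustrative only, not code from either implementation; n_pages and
dirty_bitmap are assumed):

	unsigned long i, word;

	for (i = 0; i < n_pages / BITS_PER_LONG; ++i) {
		word = dirty_bitmap[i];
		if (!word)
			continue;	/* 64 clean pages skipped per load */
		/* ... handle up to 64 dirty pages flagged in word ... */
	}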

>
>> PS: Always CC kvm@vger for stuff that others might want to review
>> (basically all patches)

(Though I sometimes check kvm-ppc on the archives,)

The GET_DIRTY_LOG thing will be welcome.

	Takuya

>
> So why do we have a separate kvm-ppc list then? :)

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
  2011-12-26  5:05       ` Takuya Yoshikawa
@ 2011-12-31  0:44         ` Paul Mackerras
  0 siblings, 0 replies; 12+ messages in thread
From: Paul Mackerras @ 2011-12-31  0:44 UTC (permalink / raw)
  To: Takuya Yoshikawa
  Cc: linuxppc-dev, Takuya Yoshikawa, Alexander Graf, kvm-ppc, KVM list

On Mon, Dec 26, 2011 at 02:05:20PM +0900, Takuya Yoshikawa wrote:

> From my x86 dirty logging experience I have some concern about your code:
> your code looks slow even when there is no/few dirty pages in the slot.
> 
> +	for (i = 0; i < memslot->npages; ++i) {
> +		if (kvm_test_clear_dirty(kvm, rmapp))
> +			__set_bit_le(i, map);
> +		++rmapp;
> +	}
> 
> The check is being done for each page and this can be very expensive because
> the number of pages is not small.
> 
> 	When we scan the dirty_bitmap 64 pages are checked at once and
> 	the problem is not so significant.
> 
> Though I do not know well what kvm-ppc's dirty logging is aiming at, I guess
> reporting cleanliness without noticeable delay to the user-space is important.
> 
> 	E.g. for VGA most of the cases are clean.  For live migration, the
> 	chance of seeing complete clean slot is small but almost all cases
> 	are sparse.

The alternative approach is not to use the hardware changed bit but
instead to install read-only HPTEs when the guest requests a
read/write mapping, and then when the guest writes to the page we
intercept the protection fault, mark the page dirty and change the
HPTE to allow writing.  Then when harvesting the dirty bits we have to
change any dirty page back to a read-only HPTE.

That is all quite doable, but I was worried about the performance
impact of the extra faults.  We intend to do some performance studies
to see whether the alternative approach would give better performance.
There is a trade-off in that the alternative approach would slow down
normal operation a little in order to speed up the harvesting of the
dirty log.  That may in fact be worthwhile.
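
Schematically the alternative would be (a sketch of the scheme only;
mark_page_dirty() is the generic KVM helper, while make_hpte_writable()
and make_hpte_read_only() are hypothetical names):

	/* At map time: the guest asks for a read/write mapping, but we
	 * install the HPTE read-only so that the first store faults. */

	/* On the resulting protection fault: */
	mark_page_dirty(kvm, gfn);	/* record it in the dirty log */
	make_hpte_writable(hptep);	/* guest retries at full speed */

	/* When harvesting the dirty log, for each dirty page: */
	make_hpte_read_only(hptep);	/* plus a tlbie, re-arming the trap */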

For now, the patch I posted at least gets the dirty page tracking
working, so we can use VGA emulation.

Paul.

^ permalink raw reply	[flat|nested] 12+ messages in thread

Thread overview: 12+ messages
2011-12-15 12:00 [PATCH 0/5] Make use of hardware reference and change bits in HPT Paul Mackerras
2011-12-15 12:01 ` [PATCH 1/5] KVM: PPC: Book3S HV: Keep HPTE locked when invalidating Paul Mackerras
2011-12-15 12:02 ` [PATCH 2/5] KVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits Paul Mackerras
2011-12-15 12:02 ` [PATCH 3/5] KVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva Paul Mackerras
2011-12-15 12:03 ` [PATCH 4/5] KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit Paul Mackerras
2011-12-23 13:23   ` Alexander Graf
2011-12-25 23:35     ` Paul Mackerras
2011-12-26  5:05       ` Takuya Yoshikawa
2011-12-31  0:44         ` Paul Mackerras
2011-12-15 12:04 ` [PATCH 5/5] KVM: PPC: Book3s HV: Implement H_CLEAR_REF and H_CLEAR_MOD hcalls Paul Mackerras
2011-12-23 13:26   ` Alexander Graf
2011-12-23 13:36 ` [PATCH 0/5] Make use of hardware reference and change bits in HPT Alexander Graf
