* [PATCH 0/7] MIPS: Add extended ASID support
@ 2016-05-06 13:36 James Hogan
  2016-05-06 13:36 ` [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run James Hogan
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

This patchset is based on v4.6-rc4 and adds support for the optional
extended ASIDs present since revision 3.5 of the MIPS32/MIPS64
architecture, which extends the TLB ASIDs from 8 bits to 10 bits. These
are known to be implemented in XLP and I6400 cores.
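
In EntryHi terms the two extra bits sit directly above the classic ASID
field; patch 2 adds the corresponding definitions:

	#define MIPS_ENTRYHI_ASIDX	(_ULCAST_(0x3) << 8)
	#define MIPS_ENTRYHI_ASID	(_ULCAST_(0xff) << 0)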

Along the way a few cleanups are made, particularly for KVM which
manipulates ASIDs from assembly code.

Patch 6 lays most of the groundwork by abstracting asid masks so they
can be variable, and patch 7 adds the actual support for extended ASIDs.

Patches 1-5 do some preliminary clean up around ASID handling, and in
KVM's locore.S to allow patch 7 to support extended ASIDs.

The use of extended ASIDs can be observed by using the 'x' sysrq to dump
TLB values, e.g. by repeatedly running this command:
$(echo x > /proc/sysrq-trigger); dmesg -c | grep asid

James Hogan (4):
  MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run
  MIPS: Add & use CP0_EntryHi ASID definitions
  MIPS: KVM/locore.S: Only preserve callee saved registers
  MIPS: KVM/locore.S: Relax noat

Paul Burton (3):
  MIPS: KVM: Abstract guest ASID mask
  MIPS: Retrieve ASID masks using function accepting struct cpuinfo_mips
  MIPS: Support extended ASIDs

 arch/mips/Kconfig                   | 17 +++++++
 arch/mips/include/asm/cpu-info.h    | 24 ++++++++++
 arch/mips/include/asm/kvm_host.h    |  5 +-
 arch/mips/include/asm/mipsregs.h    |  2 +
 arch/mips/include/asm/mmu_context.h | 41 +++++++---------
 arch/mips/kernel/asm-offsets.c      | 10 ++++
 arch/mips/kernel/cpu-probe.c        | 13 +++++
 arch/mips/kernel/genex.S            |  2 +-
 arch/mips/kernel/traps.c            |  2 +-
 arch/mips/kvm/emulate.c             | 25 +++++-----
 arch/mips/kvm/locore.S              | 94 +++++++++----------------------------
 arch/mips/kvm/tlb.c                 | 33 ++++++++-----
 arch/mips/lib/dump_tlb.c            | 10 ++--
 arch/mips/lib/r3k_dump_tlb.c        |  9 ++--
 arch/mips/mm/tlb-r3k.c              | 24 ++++++----
 arch/mips/mm/tlb-r4k.c              |  2 +-
 arch/mips/mm/tlb-r8k.c              |  2 +-
 arch/mips/pci/pci-alchemy.c         |  2 +-
 18 files changed, 173 insertions(+), 144 deletions(-)

Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Manuel Lauss <manuel.lauss@gmail.com>
Cc: Jayachandran C. <jchandra@broadcom.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-- 
2.4.10


* [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-09 14:22   ` Paolo Bonzini
  2016-05-06 13:36 ` [PATCH 2/7] MIPS: Add & use CP0_EntryHi ASID definitions James Hogan
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Paolo Bonzini,
	Radim Krčmář,
	linux-mips, kvm

MIPS KVM uses different ASIDs for guest execution than for the host.
The host ASID is saved on the stack when entering the guest with
__kvm_mips_vcpu_run(), and restored again before returning to the
caller (i.e. on exit to userland).

- This does not take into account that pre-emption may have taken place
  during that time, which may have started a new ASID cycle and resulted
  in that process' ASID being invalidated and reused.

- This does not take into account that the process may have migrated to
  a different CPU during that time, with a different ASID assignment
  since they are managed per-CPU.

- It is actually redundant, since the host ASID will be restored
  correctly by kvm_arch_vcpu_put(), which is called almost immediately
  after kvm_arch_vcpu_ioctl_run() returns.

Therefore drop this code from locore.S.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/kvm/locore.S | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
index 81687ab1b523..c24facc85357 100644
--- a/arch/mips/kvm/locore.S
+++ b/arch/mips/kvm/locore.S
@@ -32,7 +32,6 @@
     EXPORT(x);
 
 /* Overload, Danger Will Robinson!! */
-#define PT_HOST_ASID        PT_BVADDR
 #define PT_HOST_USERLOCAL   PT_EPC
 
 #define CP0_DDATA_LO        $28,3
@@ -104,11 +103,6 @@ FEXPORT(__kvm_mips_vcpu_run)
 	mfc0	v0, CP0_STATUS
 	LONG_S	v0, PT_STATUS(k1)
 
-	/* Save host ASID, shove it into the BVADDR location */
-	mfc0	v1, CP0_ENTRYHI
-	andi	v1, 0xff
-	LONG_S	v1, PT_HOST_ASID(k1)
-
 	/* Save DDATA_LO, will be used to store pointer to vcpu */
 	mfc0	v1, CP0_DDATA_LO
 	LONG_S	v1, PT_HOST_USERLOCAL(k1)
@@ -551,12 +545,6 @@ __kvm_mips_return_to_host:
 	LONG_L	k0, PT_HOST_USERLOCAL(k1)
 	mtc0	k0, CP0_DDATA_LO
 
-	/* Restore host ASID */
-	LONG_L	k0, PT_HOST_ASID(sp)
-	andi	k0, 0xff
-	mtc0	k0,CP0_ENTRYHI
-	ehb
-
 	/* Load context saved on the host stack */
 	LONG_L	$0, PT_R0(k1)
 	LONG_L	$1, PT_R1(k1)
-- 
2.4.10


* [PATCH 2/7] MIPS: Add & use CP0_EntryHi ASID definitions
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
  2016-05-06 13:36 ` [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-06 13:36 ` [PATCH 3/7] MIPS: KVM: Abstract guest ASID mask James Hogan
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Manuel Lauss, Paolo Bonzini,
	Radim Krčmář,
	linux-mips, kvm

Add definitions for the ASID field in CP0_EntryHi (along with the soon
to be used ASIDX field), and use them in a few previously hardcoded
cases.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Manuel Lauss <manuel.lauss@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/include/asm/mipsregs.h | 2 ++
 arch/mips/kernel/genex.S         | 2 +-
 arch/mips/kvm/locore.S           | 4 ++--
 arch/mips/pci/pci-alchemy.c      | 2 +-
 4 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 3ad19ad04d8a..d4c76e7f9a56 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -229,6 +229,8 @@
 
 /* MIPS32/64 EntryHI bit definitions */
 #define MIPS_ENTRYHI_EHINV	(_ULCAST_(1) << 10)
+#define MIPS_ENTRYHI_ASIDX	(_ULCAST_(0x3) << 8)
+#define MIPS_ENTRYHI_ASID	(_ULCAST_(0xff) << 0)
 
 /*
  * R4x00 interrupt enable / cause bits
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index baa7b6fc0a60..17374aef6f00 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -455,7 +455,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
 	.set	noreorder
 	/* check if TLB contains a entry for EPC */
 	MFC0	k1, CP0_ENTRYHI
-	andi	k1, 0xff	/* ASID_MASK */
+	andi	k1, MIPS_ENTRYHI_ASID
 	MFC0	k0, CP0_EPC
 	PTR_SRL k0, _PAGE_SHIFT + 1
 	PTR_SLL k0, _PAGE_SHIFT + 1
diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
index c24facc85357..308706493fd5 100644
--- a/arch/mips/kvm/locore.S
+++ b/arch/mips/kvm/locore.S
@@ -164,7 +164,7 @@ FEXPORT(__kvm_mips_load_asid)
 	INT_SLL	t2, t2, 2                   /* x4 */
 	REG_ADDU t3, t1, t2
 	LONG_L	k0, (t3)
-	andi	k0, k0, 0xff
+	andi	k0, k0, MIPS_ENTRYHI_ASID
 	mtc0	k0, CP0_ENTRYHI
 	ehb
 
@@ -483,7 +483,7 @@ __kvm_mips_return_to_guest:
 	INT_SLL	t2, t2, 2		/* x4 */
 	REG_ADDU t3, t1, t2
 	LONG_L	k0, (t3)
-	andi	k0, k0, 0xff
+	andi	k0, k0, MIPS_ENTRYHI_ASID
 	mtc0	k0, CP0_ENTRYHI
 	ehb
 
diff --git a/arch/mips/pci/pci-alchemy.c b/arch/mips/pci/pci-alchemy.c
index 28952637a862..c8994c156e2d 100644
--- a/arch/mips/pci/pci-alchemy.c
+++ b/arch/mips/pci/pci-alchemy.c
@@ -76,7 +76,7 @@ static void mod_wired_entry(int entry, unsigned long entrylo0,
 	unsigned long old_ctx;
 
 	/* Save old context and create impossible VPN2 value */
-	old_ctx = read_c0_entryhi() & 0xff;
+	old_ctx = read_c0_entryhi() & MIPS_ENTRYHI_ASID;
 	old_pagemask = read_c0_pagemask();
 	write_c0_index(entry);
 	write_c0_pagemask(pagemask);
-- 
2.4.10


* [PATCH 3/7] MIPS: KVM: Abstract guest ASID mask
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
  2016-05-06 13:36 ` [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run James Hogan
  2016-05-06 13:36 ` [PATCH 2/7] MIPS: Add & use CP0_EntryHi ASID definitions James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-06 13:36 ` [PATCH 4/7] MIPS: KVM/locore.S: Only preserve callee saved registers James Hogan
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Paolo Bonzini,
	Radim Krčmář,
	linux-mips, kvm

From: Paul Burton <paul.burton@imgtec.com>

In preparation for supporting varied widths of ASID mask in the kernel
in general, switch KVM's guest ASIDs to a new KVM_ENTRYHI_ASID
definition based on the 8-bit MIPS_ENTRYHI_ASID instead of ASID_MASK.

It could potentially be used to support extended guest ASIDs in the
future.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/include/asm/kvm_host.h |  5 +++--
 arch/mips/kvm/emulate.c          | 25 +++++++++++++------------
 arch/mips/kvm/tlb.c              |  3 ++-
 3 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index f6b12790716c..b76e132c87e4 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -311,17 +311,18 @@ enum emulation_result {
 #define MIPS3_PG_FRAME		0x3fffffc0
 
 #define VPN2_MASK		0xffffe000
+#define KVM_ENTRYHI_ASID	MIPS_ENTRYHI_ASID
 #define TLB_IS_GLOBAL(x)	(((x).tlb_lo0 & MIPS3_PG_G) &&		\
 				 ((x).tlb_lo1 & MIPS3_PG_G))
 #define TLB_VPN2(x)		((x).tlb_hi & VPN2_MASK)
-#define TLB_ASID(x)		((x).tlb_hi & ASID_MASK)
+#define TLB_ASID(x)		((x).tlb_hi & KVM_ENTRYHI_ASID)
 #define TLB_IS_VALID(x, va)	(((va) & (1 << PAGE_SHIFT))		\
 				 ? ((x).tlb_lo1 & MIPS3_PG_V)		\
 				 : ((x).tlb_lo0 & MIPS3_PG_V))
 #define TLB_HI_VPN2_HIT(x, y)	((TLB_VPN2(x) & ~(x).tlb_mask) ==	\
 				 ((y) & VPN2_MASK & ~(x).tlb_mask))
 #define TLB_HI_ASID_HIT(x, y)	(TLB_IS_GLOBAL(x) ||			\
-				 TLB_ASID(x) == ((y) & ASID_MASK))
+				 TLB_ASID(x) == ((y) & KVM_ENTRYHI_ASID))
 
 struct kvm_mips_tlb {
 	long tlb_mask;
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index b37954cc880d..8e945e866a73 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -1068,15 +1068,15 @@ enum emulation_result kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc,
 					kvm_read_c0_guest_ebase(cop0));
 			} else if (rd == MIPS_CP0_TLB_HI && sel == 0) {
 				uint32_t nasid =
-					vcpu->arch.gprs[rt] & ASID_MASK;
+					vcpu->arch.gprs[rt] & KVM_ENTRYHI_ASID;
 				if ((KSEGX(vcpu->arch.gprs[rt]) != CKSEG0) &&
 				    ((kvm_read_c0_guest_entryhi(cop0) &
-				      ASID_MASK) != nasid)) {
+				      KVM_ENTRYHI_ASID) != nasid)) {
 					kvm_debug("MTCz, change ASID from %#lx to %#lx\n",
 						kvm_read_c0_guest_entryhi(cop0)
-						& ASID_MASK,
+						& KVM_ENTRYHI_ASID,
 						vcpu->arch.gprs[rt]
-						& ASID_MASK);
+						& KVM_ENTRYHI_ASID);
 
 					/* Blow away the shadow host TLBs */
 					kvm_mips_flush_host_tlb(1);
@@ -1620,7 +1620,7 @@ enum emulation_result kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc,
 		 */
 		index = kvm_mips_guest_tlb_lookup(vcpu, (va & VPN2_MASK) |
 						  (kvm_read_c0_guest_entryhi
-						   (cop0) & ASID_MASK));
+						   (cop0) & KVM_ENTRYHI_ASID));
 
 		if (index < 0) {
 			vcpu->arch.host_cp0_entryhi = (va & VPN2_MASK);
@@ -1786,7 +1786,7 @@ enum emulation_result kvm_mips_emulate_tlbmiss_ld(unsigned long cause,
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	unsigned long entryhi = (vcpu->arch.  host_cp0_badvaddr & VPN2_MASK) |
-				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+			(kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1833,7 +1833,7 @@ enum emulation_result kvm_mips_emulate_tlbinv_ld(unsigned long cause,
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	unsigned long entryhi =
 		(vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-		(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+		(kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1878,7 +1878,7 @@ enum emulation_result kvm_mips_emulate_tlbmiss_st(unsigned long cause,
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+			(kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1922,7 +1922,7 @@ enum emulation_result kvm_mips_emulate_tlbinv_st(unsigned long cause,
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-		(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+		(kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
 		/* save old pc */
@@ -1967,7 +1967,7 @@ enum emulation_result kvm_mips_handle_tlbmod(unsigned long cause, uint32_t *opc,
 #ifdef DEBUG
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+			(kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
 	int index;
 
 	/* If address not in the guest TLB, then we are in trouble */
@@ -1994,7 +1994,7 @@ enum emulation_result kvm_mips_emulate_tlbmod(unsigned long cause,
 {
 	struct mips_coproc *cop0 = vcpu->arch.cop0;
 	unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
-				(kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+			(kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
 	struct kvm_vcpu_arch *arch = &vcpu->arch;
 
 	if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
@@ -2569,7 +2569,8 @@ enum emulation_result kvm_mips_handle_tlbmiss(unsigned long cause,
 	 */
 	index = kvm_mips_guest_tlb_lookup(vcpu,
 		      (va & VPN2_MASK) |
-		      (kvm_read_c0_guest_entryhi(vcpu->arch.cop0) & ASID_MASK));
+		      (kvm_read_c0_guest_entryhi(vcpu->arch.cop0) &
+		       KVM_ENTRYHI_ASID));
 	if (index < 0) {
 		if (exccode == EXCCODE_TLBL) {
 			er = kvm_mips_emulate_tlbmiss_ld(cause, opc, run, vcpu);
diff --git a/arch/mips/kvm/tlb.c b/arch/mips/kvm/tlb.c
index e0e1d0a611fc..52d87280f865 100644
--- a/arch/mips/kvm/tlb.c
+++ b/arch/mips/kvm/tlb.c
@@ -748,7 +748,8 @@ uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
 			inst = *(opc);
 		} else {
 			vpn2 = (unsigned long) opc & VPN2_MASK;
-			asid = kvm_read_c0_guest_entryhi(cop0) & ASID_MASK;
+			asid = kvm_read_c0_guest_entryhi(cop0) &
+						KVM_ENTRYHI_ASID;
 			index = kvm_mips_guest_tlb_lookup(vcpu, vpn2 | asid);
 			if (index < 0) {
 				kvm_err("%s: get_user_failed for %p, vcpu: %p, ASID: %#lx\n",
-- 
2.4.10


* [PATCH 4/7] MIPS: KVM/locore.S: Only preserve callee saved registers
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
                   ` (2 preceding siblings ...)
  2016-05-06 13:36 ` [PATCH 3/7] MIPS: KVM: Abstract guest ASID mask James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-06 13:36 ` [PATCH 5/7] MIPS: KVM/locore.S: Relax noat James Hogan
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Paolo Bonzini,
	Radim Krčmář,
	linux-mips, kvm

Update __kvm_mips_vcpu_run() to only save and restore callee saved
registers. It is always called using the standard ABIs, so the caller
will preserve any other registers that need preserving.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/kvm/locore.S | 48 +-----------------------------------------------
 1 file changed, 1 insertion(+), 47 deletions(-)

diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
index 308706493fd5..3ea522e4954b 100644
--- a/arch/mips/kvm/locore.S
+++ b/arch/mips/kvm/locore.S
@@ -53,40 +53,14 @@
 FEXPORT(__kvm_mips_vcpu_run)
 	/* k0/k1 not being used in host kernel context */
 	INT_ADDIU k1, sp, -PT_SIZE
-	LONG_S	$0, PT_R0(k1)
-	LONG_S	$1, PT_R1(k1)
-	LONG_S	$2, PT_R2(k1)
-	LONG_S	$3, PT_R3(k1)
-
-	LONG_S	$4, PT_R4(k1)
-	LONG_S	$5, PT_R5(k1)
-	LONG_S	$6, PT_R6(k1)
-	LONG_S	$7, PT_R7(k1)
-
-	LONG_S	$8,  PT_R8(k1)
-	LONG_S	$9,  PT_R9(k1)
-	LONG_S	$10, PT_R10(k1)
-	LONG_S	$11, PT_R11(k1)
-	LONG_S	$12, PT_R12(k1)
-	LONG_S	$13, PT_R13(k1)
-	LONG_S	$14, PT_R14(k1)
-	LONG_S	$15, PT_R15(k1)
 	LONG_S	$16, PT_R16(k1)
 	LONG_S	$17, PT_R17(k1)
-
 	LONG_S	$18, PT_R18(k1)
 	LONG_S	$19, PT_R19(k1)
 	LONG_S	$20, PT_R20(k1)
 	LONG_S	$21, PT_R21(k1)
 	LONG_S	$22, PT_R22(k1)
 	LONG_S	$23, PT_R23(k1)
-	LONG_S	$24, PT_R24(k1)
-	LONG_S	$25, PT_R25(k1)
-
-	/*
-	 * XXXKYMA k0/k1 not saved, not being used if we got here through
-	 * an ioctl()
-	 */
 
 	LONG_S	$28, PT_R28(k1)
 	LONG_S	$29, PT_R29(k1)
@@ -545,10 +519,6 @@ __kvm_mips_return_to_host:
 	LONG_L	k0, PT_HOST_USERLOCAL(k1)
 	mtc0	k0, CP0_DDATA_LO
 
-	/* Load context saved on the host stack */
-	LONG_L	$0, PT_R0(k1)
-	LONG_L	$1, PT_R1(k1)
-
 	/*
 	 * r2/v0 is the return code, shift it down by 2 (arithmetic)
 	 * to recover the err code
@@ -556,19 +526,7 @@ __kvm_mips_return_to_host:
 	INT_SRA	k0, v0, 2
 	move	$2, k0
 
-	LONG_L	$3, PT_R3(k1)
-	LONG_L	$4, PT_R4(k1)
-	LONG_L	$5, PT_R5(k1)
-	LONG_L	$6, PT_R6(k1)
-	LONG_L	$7, PT_R7(k1)
-	LONG_L	$8, PT_R8(k1)
-	LONG_L	$9, PT_R9(k1)
-	LONG_L	$10, PT_R10(k1)
-	LONG_L	$11, PT_R11(k1)
-	LONG_L	$12, PT_R12(k1)
-	LONG_L	$13, PT_R13(k1)
-	LONG_L	$14, PT_R14(k1)
-	LONG_L	$15, PT_R15(k1)
+	/* Load context saved on the host stack */
 	LONG_L	$16, PT_R16(k1)
 	LONG_L	$17, PT_R17(k1)
 	LONG_L	$18, PT_R18(k1)
@@ -577,10 +535,6 @@ __kvm_mips_return_to_host:
 	LONG_L	$21, PT_R21(k1)
 	LONG_L	$22, PT_R22(k1)
 	LONG_L	$23, PT_R23(k1)
-	LONG_L	$24, PT_R24(k1)
-	LONG_L	$25, PT_R25(k1)
-
-	/* Host k0/k1 were not saved */
 
 	LONG_L	$28, PT_R28(k1)
 	LONG_L	$29, PT_R29(k1)
-- 
2.4.10


* [PATCH 5/7] MIPS: KVM/locore.S: Relax noat
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
                   ` (3 preceding siblings ...)
  2016-05-06 13:36 ` [PATCH 4/7] MIPS: KVM/locore.S: Only preserve callee saved registers James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-06 13:36 ` [PATCH 6/7] MIPS: Retrieve ASID masks using function accepting struct cpuinfo_mips James Hogan
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Paolo Bonzini,
	Radim Krčmář,
	linux-mips, kvm

Now that the at register ($1) is no longer saved by
__kvm_mips_vcpu_run(), relax the noat assembler directive so that it
only applies around code where at is restored before entering guest, and
saved after exiting guest.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/kvm/locore.S | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
index 3ea522e4954b..1f2167bc847d 100644
--- a/arch/mips/kvm/locore.S
+++ b/arch/mips/kvm/locore.S
@@ -48,7 +48,6 @@
  * a1: vcpu
  */
 	.set	noreorder
-	.set	noat
 
 FEXPORT(__kvm_mips_vcpu_run)
 	/* k0/k1 not being used in host kernel context */
@@ -145,6 +144,7 @@ FEXPORT(__kvm_mips_load_asid)
 	/* Disable RDHWR access */
 	mtc0	zero, CP0_HWRENA
 
+	.set	noat
 	/* Now load up the Guest Context from VCPU */
 	LONG_L	$1, VCPU_R1(k1)
 	LONG_L	$2, VCPU_R2(k1)
@@ -256,6 +256,8 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	LONG_S	$30, VCPU_R30(k1)
 	LONG_S	$31, VCPU_R31(k1)
 
+	.set at
+
 	/* We need to save hi/lo and restore them on the way out */
 	mfhi	t0
 	LONG_S	t0, VCPU_HI(k1)
@@ -307,9 +309,7 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	/* load up the host EBASE */
 	mfc0	v0, CP0_STATUS
 
-	.set	at
 	or	k0, v0, ST0_BEV
-	.set	noat
 
 	mtc0	k0, CP0_STATUS
 	ehb
@@ -321,7 +321,6 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	 * If FPU is enabled, save FCR31 and clear it so that later ctc1's don't
 	 * trigger FPE for pending exceptions.
 	 */
-	.set	at
 	and	v1, v0, ST0_CU1
 	beqz	v1, 1f
 	 nop
@@ -331,7 +330,6 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 	sw	t0, VCPU_FCR31(k1)
 	ctc1	zero,fcr31
 	.set	pop
-	.set	noat
 1:
 
 #ifdef CONFIG_CPU_HAS_MSA
@@ -354,10 +352,8 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
 #endif
 
 	/* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
-	.set	at
 	and	v0, v0, ~(ST0_EXL | KSU_USER | ST0_IE)
 	or	v0, v0, ST0_CU0
-	.set	noat
 	mtc0	v0, CP0_STATUS
 	ehb
 
@@ -424,18 +420,14 @@ __kvm_mips_return_to_guest:
 
 	/* Switch EBASE back to the one used by KVM */
 	mfc0	v1, CP0_STATUS
-	.set	at
 	or	k0, v1, ST0_BEV
-	.set	noat
 	mtc0	k0, CP0_STATUS
 	ehb
 	mtc0	t0, CP0_EBASE
 
 	/* Setup status register for running guest in UM */
-	.set	at
 	or	v1, v1, (ST0_EXL | KSU_USER | ST0_IE)
 	and	v1, v1, ~(ST0_CU0 | ST0_MX)
-	.set	noat
 	mtc0	v1, CP0_STATUS
 	ehb
 
@@ -464,6 +456,7 @@ __kvm_mips_return_to_guest:
 	/* Disable RDHWR access */
 	mtc0	zero, CP0_HWRENA
 
+	.set	noat
 	/* load the guest context from VCPU and return */
 	LONG_L	$0, VCPU_R0(k1)
 	LONG_L	$1, VCPU_R1(k1)
@@ -509,6 +502,7 @@ FEXPORT(__kvm_mips_skip_guest_restore)
 	LONG_L	k1, VCPU_R27(k1)
 
 	eret
+	.set	at
 
 __kvm_mips_return_to_host:
 	/* EBASE is already pointing to Linux */
-- 
2.4.10


* [PATCH 6/7] MIPS: Retrieve ASID masks using function accepting struct cpuinfo_mips
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
                   ` (4 preceding siblings ...)
  2016-05-06 13:36 ` [PATCH 5/7] MIPS: KVM/locore.S: Relax noat James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-06 13:36 ` [PATCH 7/7] MIPS: Support extended ASIDs James Hogan
  2016-05-09 13:23 ` [PATCH 0/7] MIPS: Add extended ASID support Ralf Baechle
  7 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Paolo Bonzini,
	Radim Krčmář,
	linux-mips, kvm

From: Paul Burton <paul.burton@imgtec.com>

In preparation for supporting variable ASID masks, retrieve ASID masks
using functions in asm/cpu-info.h which accept struct cpuinfo_mips. This
will allow those functions to determine the ASID mask based upon the CPU
in a later patch. This also allows for the r3k & r8k cases to be handled
in Kconfig, which is arguably cleaner than the previous #ifdefs.
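
As a concrete sketch of the mask arithmetic for the common case (not
part of the patch itself, just the numbers falling out of the helpers
added to mmu_context.h below): with an 8-bit ASID at shift 0,
cpu_asid_mask() returns 0xff, so

	asid_version_mask(cpu)  == ~(0xff | 0xfe) == ~0xffUL
	asid_first_version(cpu) == ~(~0xffUL) + 1 == 0x100

which match the old ASID_VERSION_MASK and ASID_FIRST_VERSION values,
while the R3000/TX39 (mask 0xfc0) and R8000 (mask 0xff0) cases fall out
of the same expressions via Kconfig instead of needing their own
#ifdefs.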

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/Kconfig                   | 11 ++++++++++
 arch/mips/include/asm/cpu-info.h    | 10 +++++++++
 arch/mips/include/asm/mmu_context.h | 41 ++++++++++++++++---------------------
 arch/mips/kernel/traps.c            |  2 +-
 arch/mips/kvm/tlb.c                 | 30 +++++++++++++++++----------
 arch/mips/lib/dump_tlb.c            | 10 +++++----
 arch/mips/lib/r3k_dump_tlb.c        |  9 ++++----
 arch/mips/mm/tlb-r3k.c              | 24 +++++++++++++---------
 arch/mips/mm/tlb-r4k.c              |  2 +-
 arch/mips/mm/tlb-r8k.c              |  2 +-
 10 files changed, 86 insertions(+), 55 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2018c2b0e078..132d1c68befc 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2401,6 +2401,17 @@ config CPU_R4000_WORKAROUNDS
 config CPU_R4400_WORKAROUNDS
 	bool
 
+config MIPS_ASID_SHIFT
+	int
+	default 6 if CPU_R3000 || CPU_TX39XX
+	default 4 if CPU_R8000
+	default 0
+
+config MIPS_ASID_BITS
+	int
+	default 6 if CPU_R3000 || CPU_TX39XX
+	default 8
+
 #
 # - Highmem only makes sense for the 32-bit kernel.
 # - The current highmem code will only work properly on physically indexed
diff --git a/arch/mips/include/asm/cpu-info.h b/arch/mips/include/asm/cpu-info.h
index af12c1f9f1a8..4cb3cdadc41e 100644
--- a/arch/mips/include/asm/cpu-info.h
+++ b/arch/mips/include/asm/cpu-info.h
@@ -131,4 +131,14 @@ struct proc_cpuinfo_notifier_args {
 # define cpu_vpe_id(cpuinfo)	({ (void)cpuinfo; 0; })
 #endif
 
+static inline unsigned long cpu_asid_inc(void)
+{
+	return 1 << CONFIG_MIPS_ASID_SHIFT;
+}
+
+static inline unsigned long cpu_asid_mask(struct cpuinfo_mips *cpuinfo)
+{
+	return ((1 << CONFIG_MIPS_ASID_BITS) - 1) << CONFIG_MIPS_ASID_SHIFT;
+}
+
 #endif /* __ASM_CPU_INFO_H */
diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 45914b59824c..fc57e135cb0a 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -65,37 +65,32 @@ extern unsigned long pgd_current[];
 	back_to_back_c0_hazard();					\
 	TLBMISS_HANDLER_SETUP_PGD(swapper_pg_dir)
 #endif /* CONFIG_MIPS_PGD_C0_CONTEXT*/
-#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
-#define ASID_INC	0x40
-#define ASID_MASK	0xfc0
+/*
+ *  All unused by hardware upper bits will be considered
+ *  as a software asid extension.
+ */
+static unsigned long asid_version_mask(unsigned int cpu)
+{
+	unsigned long asid_mask = cpu_asid_mask(&cpu_data[cpu]);
 
-#elif defined(CONFIG_CPU_R8000)
+	return ~(asid_mask | (asid_mask - 1));
+}
 
-#define ASID_INC	0x10
-#define ASID_MASK	0xff0
-
-#else /* FIXME: not correct for R6000 */
-
-#define ASID_INC	0x1
-#define ASID_MASK	0xff
-
-#endif
+static unsigned long asid_first_version(unsigned int cpu)
+{
+	return ~asid_version_mask(cpu) + 1;
+}
 
 #define cpu_context(cpu, mm)	((mm)->context.asid[cpu])
-#define cpu_asid(cpu, mm)	(cpu_context((cpu), (mm)) & ASID_MASK)
 #define asid_cache(cpu)		(cpu_data[cpu].asid_cache)
+#define cpu_asid(cpu, mm) \
+	(cpu_context((cpu), (mm)) & cpu_asid_mask(&cpu_data[cpu]))
 
 static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 {
 }
 
-/*
- *  All unused by hardware upper bits will be considered
- *  as a software asid extension.
- */
-#define ASID_VERSION_MASK  ((unsigned long)~(ASID_MASK|(ASID_MASK-1)))
-#define ASID_FIRST_VERSION ((unsigned long)(~ASID_VERSION_MASK) + 1)
 
 /* Normal, classic MIPS get_new_mmu_context */
 static inline void
@@ -104,7 +99,7 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	extern void kvm_local_flush_tlb_all(void);
 	unsigned long asid = asid_cache(cpu);
 
-	if (! ((asid += ASID_INC) & ASID_MASK) ) {
+	if (!((asid += cpu_asid_inc()) & cpu_asid_mask(&cpu_data[cpu]))) {
 		if (cpu_has_vtag_icache)
 			flush_icache_all();
 #ifdef CONFIG_KVM
@@ -113,7 +108,7 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 		local_flush_tlb_all();	/* start new asid cycle */
 #endif
 		if (!asid)		/* fix version if needed */
-			asid = ASID_FIRST_VERSION;
+			asid = asid_first_version(cpu);
 	}
 
 	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
@@ -145,7 +140,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 
 	htw_stop();
 	/* Check if our ASID is of an older version and thus invalid */
-	if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & ASID_VERSION_MASK)
+	if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & asid_version_mask(cpu))
 		get_new_mmu_context(next, cpu);
 	write_c0_entryhi(cpu_asid(cpu, next));
 	TLBMISS_HANDLER_SETUP_PGD(next->pgd);
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index ae0c89d23ad7..1dd4198f25fb 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -2134,7 +2134,7 @@ void per_cpu_trap_init(bool is_boot_cpu)
 	}
 
 	if (!cpu_data[cpu].asid_cache)
-		cpu_data[cpu].asid_cache = ASID_FIRST_VERSION;
+		cpu_data[cpu].asid_cache = asid_first_version(cpu);
 
 	atomic_inc(&init_mm.mm_count);
 	current->active_mm = &init_mm;
diff --git a/arch/mips/kvm/tlb.c b/arch/mips/kvm/tlb.c
index 52d87280f865..b9c52c1d35d6 100644
--- a/arch/mips/kvm/tlb.c
+++ b/arch/mips/kvm/tlb.c
@@ -49,12 +49,18 @@ EXPORT_SYMBOL_GPL(kvm_mips_is_error_pfn);
 
 uint32_t kvm_mips_get_kernel_asid(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.guest_kernel_asid[smp_processor_id()] & ASID_MASK;
+	int cpu = smp_processor_id();
+
+	return vcpu->arch.guest_kernel_asid[cpu] &
+			cpu_asid_mask(&cpu_data[cpu]);
 }
 
 uint32_t kvm_mips_get_user_asid(struct kvm_vcpu *vcpu)
 {
-	return vcpu->arch.guest_user_asid[smp_processor_id()] & ASID_MASK;
+	int cpu = smp_processor_id();
+
+	return vcpu->arch.guest_user_asid[cpu] &
+			cpu_asid_mask(&cpu_data[cpu]);
 }
 
 inline uint32_t kvm_mips_get_commpage_asid(struct kvm_vcpu *vcpu)
@@ -78,7 +84,8 @@ void kvm_mips_dump_host_tlbs(void)
 	old_pagemask = read_c0_pagemask();
 
 	kvm_info("HOST TLBs:\n");
-	kvm_info("ASID: %#lx\n", read_c0_entryhi() & ASID_MASK);
+	kvm_info("ASID: %#lx\n", read_c0_entryhi() &
+		 cpu_asid_mask(&current_cpu_data));
 
 	for (i = 0; i < current_cpu_data.tlbsize; i++) {
 		write_c0_index(i);
@@ -564,15 +571,15 @@ void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
 {
 	unsigned long asid = asid_cache(cpu);
 
-	asid += ASID_INC;
-	if (!(asid & ASID_MASK)) {
+	asid += cpu_asid_inc();
+	if (!(asid & cpu_asid_mask(&cpu_data[cpu]))) {
 		if (cpu_has_vtag_icache)
 			flush_icache_all();
 
 		kvm_local_flush_tlb_all();      /* start new asid cycle */
 
 		if (!asid)      /* fix version if needed */
-			asid = ASID_FIRST_VERSION;
+			asid = asid_first_version(cpu);
 	}
 
 	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
@@ -627,6 +634,7 @@ static void kvm_mips_migrate_count(struct kvm_vcpu *vcpu)
 /* Restore ASID once we are scheduled back after preemption */
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
+	unsigned long asid_mask = cpu_asid_mask(&cpu_data[cpu]);
 	unsigned long flags;
 	int newasid = 0;
 
@@ -637,7 +645,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	local_irq_save(flags);
 
 	if ((vcpu->arch.guest_kernel_asid[cpu] ^ asid_cache(cpu)) &
-							ASID_VERSION_MASK) {
+						asid_version_mask(cpu)) {
 		kvm_get_new_mmu_context(&vcpu->arch.guest_kernel_mm, cpu, vcpu);
 		vcpu->arch.guest_kernel_asid[cpu] =
 		    vcpu->arch.guest_kernel_mm.context.asid[cpu];
@@ -672,7 +680,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		 */
 		if (current->flags & PF_VCPU) {
 			write_c0_entryhi(vcpu->arch.
-					 preempt_entryhi & ASID_MASK);
+					 preempt_entryhi & asid_mask);
 			ehb();
 		}
 	} else {
@@ -687,11 +695,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 			if (KVM_GUEST_KERNEL_MODE(vcpu))
 				write_c0_entryhi(vcpu->arch.
 						 guest_kernel_asid[cpu] &
-						 ASID_MASK);
+						 asid_mask);
 			else
 				write_c0_entryhi(vcpu->arch.
 						 guest_user_asid[cpu] &
-						 ASID_MASK);
+						 asid_mask);
 			ehb();
 		}
 	}
@@ -721,7 +729,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	kvm_mips_callbacks->vcpu_get_regs(vcpu);
 
 	if (((cpu_context(cpu, current->mm) ^ asid_cache(cpu)) &
-	     ASID_VERSION_MASK)) {
+	     asid_version_mask(cpu))) {
 		kvm_debug("%s: Dropping MMU Context:  %#lx\n", __func__,
 			  cpu_context(cpu, current->mm));
 		drop_mmu_context(current->mm, cpu);
diff --git a/arch/mips/lib/dump_tlb.c b/arch/mips/lib/dump_tlb.c
index 92a37319efbe..3283aa7423e4 100644
--- a/arch/mips/lib/dump_tlb.c
+++ b/arch/mips/lib/dump_tlb.c
@@ -73,6 +73,8 @@ static void dump_tlb(int first, int last)
 	unsigned long s_entryhi, entryhi, asid;
 	unsigned long long entrylo0, entrylo1, pa;
 	unsigned int s_index, s_pagemask, pagemask, c0, c1, i;
+	unsigned long asidmask = cpu_asid_mask(&current_cpu_data);
+	int asidwidth = DIV_ROUND_UP(ilog2(asidmask) + 1, 4);
 #ifdef CONFIG_32BIT
 	bool xpa = cpu_has_xpa && (read_c0_pagegrain() & PG_ELPA);
 	int pwidth = xpa ? 11 : 8;
@@ -86,7 +88,7 @@ static void dump_tlb(int first, int last)
 	s_pagemask = read_c0_pagemask();
 	s_entryhi = read_c0_entryhi();
 	s_index = read_c0_index();
-	asid = s_entryhi & 0xff;
+	asid = s_entryhi & asidmask;
 
 	for (i = first; i <= last; i++) {
 		write_c0_index(i);
@@ -115,7 +117,7 @@ static void dump_tlb(int first, int last)
 		 * due to duplicate TLB entry.
 		 */
 		if (!((entrylo0 | entrylo1) & ENTRYLO_G) &&
-		    (entryhi & 0xff) != asid)
+		    (entryhi & asidmask) != asid)
 			continue;
 
 		/*
@@ -126,9 +128,9 @@ static void dump_tlb(int first, int last)
 		c0 = (entrylo0 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
 		c1 = (entrylo1 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
 
-		printk("va=%0*lx asid=%02lx\n",
+		printk("va=%0*lx asid=%0*lx\n",
 		       vwidth, (entryhi & ~0x1fffUL),
-		       entryhi & 0xff);
+		       asidwidth, entryhi & asidmask);
 		/* RI/XI are in awkward places, so mask them off separately */
 		pa = entrylo0 & ~(MIPS_ENTRYLO_RI | MIPS_ENTRYLO_XI);
 		if (xpa)
diff --git a/arch/mips/lib/r3k_dump_tlb.c b/arch/mips/lib/r3k_dump_tlb.c
index cfcbb5218b59..744f4a7bc49d 100644
--- a/arch/mips/lib/r3k_dump_tlb.c
+++ b/arch/mips/lib/r3k_dump_tlb.c
@@ -29,9 +29,10 @@ static void dump_tlb(int first, int last)
 {
 	int	i;
 	unsigned int asid;
-	unsigned long entryhi, entrylo0;
+	unsigned long entryhi, entrylo0, asid_mask;
 
-	asid = read_c0_entryhi() & ASID_MASK;
+	asid_mask = cpu_asid_mask(&current_cpu_data);
+	asid = read_c0_entryhi() & asid_mask;
 
 	for (i = first; i <= last; i++) {
 		write_c0_index(i<<8);
@@ -46,7 +47,7 @@ static void dump_tlb(int first, int last)
 		/* Unused entries have a virtual address of KSEG0.  */
 		if ((entryhi & PAGE_MASK) != KSEG0 &&
 		    (entrylo0 & R3K_ENTRYLO_G ||
-		     (entryhi & ASID_MASK) == asid)) {
+		     (entryhi & asid_mask) == asid)) {
 			/*
 			 * Only print entries in use
 			 */
@@ -55,7 +56,7 @@ static void dump_tlb(int first, int last)
 			printk("va=%08lx asid=%08lx"
 			       "  [pa=%06lx n=%d d=%d v=%d g=%d]",
 			       entryhi & PAGE_MASK,
-			       entryhi & ASID_MASK,
+			       entryhi & asid_mask,
 			       entrylo0 & PAGE_MASK,
 			       (entrylo0 & R3K_ENTRYLO_N) ? 1 : 0,
 			       (entrylo0 & R3K_ENTRYLO_D) ? 1 : 0,
diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c
index b4f366f7c0f5..1290b995695d 100644
--- a/arch/mips/mm/tlb-r3k.c
+++ b/arch/mips/mm/tlb-r3k.c
@@ -43,7 +43,7 @@ static void local_flush_tlb_from(int entry)
 {
 	unsigned long old_ctx;
 
-	old_ctx = read_c0_entryhi() & ASID_MASK;
+	old_ctx = read_c0_entryhi() & cpu_asid_mask(&current_cpu_data);
 	write_c0_entrylo0(0);
 	while (entry < current_cpu_data.tlbsize) {
 		write_c0_index(entry << 8);
@@ -81,6 +81,7 @@ void local_flush_tlb_mm(struct mm_struct *mm)
 void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			   unsigned long end)
 {
+	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	struct mm_struct *mm = vma->vm_mm;
 	int cpu = smp_processor_id();
 
@@ -89,13 +90,13 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 #ifdef DEBUG_TLB
 		printk("[tlbrange<%lu,0x%08lx,0x%08lx>]",
-			cpu_context(cpu, mm) & ASID_MASK, start, end);
+			cpu_context(cpu, mm) & asid_mask, start, end);
 #endif
 		local_irq_save(flags);
 		size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
 		if (size <= current_cpu_data.tlbsize) {
-			int oldpid = read_c0_entryhi() & ASID_MASK;
-			int newpid = cpu_context(cpu, mm) & ASID_MASK;
+			int oldpid = read_c0_entryhi() & asid_mask;
+			int newpid = cpu_context(cpu, mm) & asid_mask;
 
 			start &= PAGE_MASK;
 			end += PAGE_SIZE - 1;
@@ -159,6 +160,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 
 void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
+	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	int cpu = smp_processor_id();
 
 	if (cpu_context(cpu, vma->vm_mm) != 0) {
@@ -168,10 +170,10 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 #ifdef DEBUG_TLB
 		printk("[tlbpage<%lu,0x%08lx>]", cpu_context(cpu, vma->vm_mm), page);
 #endif
-		newpid = cpu_context(cpu, vma->vm_mm) & ASID_MASK;
+		newpid = cpu_context(cpu, vma->vm_mm) & asid_mask;
 		page &= PAGE_MASK;
 		local_irq_save(flags);
-		oldpid = read_c0_entryhi() & ASID_MASK;
+		oldpid = read_c0_entryhi() & asid_mask;
 		write_c0_entryhi(page | newpid);
 		BARRIER;
 		tlb_probe();
@@ -190,6 +192,7 @@ finish:
 
 void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
 {
+	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	unsigned long flags;
 	int idx, pid;
 
@@ -199,10 +202,10 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
 	if (current->active_mm != vma->vm_mm)
 		return;
 
-	pid = read_c0_entryhi() & ASID_MASK;
+	pid = read_c0_entryhi() & asid_mask;
 
 #ifdef DEBUG_TLB
-	if ((pid != (cpu_context(cpu, vma->vm_mm) & ASID_MASK)) || (cpu_context(cpu, vma->vm_mm) == 0)) {
+	if ((pid != (cpu_context(cpu, vma->vm_mm) & asid_mask)) || (cpu_context(cpu, vma->vm_mm) == 0)) {
 		printk("update_mmu_cache: Wheee, bogus tlbpid mmpid=%lu tlbpid=%d\n",
 		       (cpu_context(cpu, vma->vm_mm)), pid);
 	}
@@ -228,6 +231,7 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
 void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
 		     unsigned long entryhi, unsigned long pagemask)
 {
+	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	unsigned long flags;
 	unsigned long old_ctx;
 	static unsigned long wired = 0;
@@ -243,7 +247,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
 
 		local_irq_save(flags);
 		/* Save old context and create impossible VPN2 value */
-		old_ctx = read_c0_entryhi() & ASID_MASK;
+		old_ctx = read_c0_entryhi() & asid_mask;
 		old_pagemask = read_c0_pagemask();
 		w = read_c0_wired();
 		write_c0_wired(w + 1);
@@ -266,7 +270,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
 #endif
 
 		local_irq_save(flags);
-		old_ctx = read_c0_entryhi() & ASID_MASK;
+		old_ctx = read_c0_entryhi() & asid_mask;
 		write_c0_entrylo0(entrylo0);
 		write_c0_entryhi(entryhi);
 		write_c0_index(wired);
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index c17d7627f872..0063daa8b679 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -301,7 +301,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 	local_irq_save(flags);
 
 	htw_stop();
-	pid = read_c0_entryhi() & ASID_MASK;
+	pid = read_c0_entryhi() & cpu_asid_mask(&current_cpu_data);
 	address &= (PAGE_MASK << 1);
 	write_c0_entryhi(address | pid);
 	pgdp = pgd_offset(vma->vm_mm, address);
diff --git a/arch/mips/mm/tlb-r8k.c b/arch/mips/mm/tlb-r8k.c
index 138a2ec7cc6b..e86e2e55ad3e 100644
--- a/arch/mips/mm/tlb-r8k.c
+++ b/arch/mips/mm/tlb-r8k.c
@@ -194,7 +194,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
 	if (current->active_mm != vma->vm_mm)
 		return;
 
-	pid = read_c0_entryhi() & ASID_MASK;
+	pid = read_c0_entryhi() & cpu_asid_mask(&current_cpu_data);
 
 	local_irq_save(flags);
 	address &= PAGE_MASK;
-- 
2.4.10


* [PATCH 7/7] MIPS: Support extended ASIDs
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
                   ` (5 preceding siblings ...)
  2016-05-06 13:36 ` [PATCH 6/7] MIPS: Retrieve ASID masks using function accepting struct cpuinfo_mips James Hogan
@ 2016-05-06 13:36 ` James Hogan
  2016-05-09 13:23 ` [PATCH 0/7] MIPS: Add extended ASID support Ralf Baechle
  7 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-06 13:36 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paul Burton, James Hogan, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

From: Paul Burton <paul.burton@imgtec.com>

Add support for extended ASIDs as determined by the Config4.AE bit.
Since the only supported CPUs known to implement this are Netlogic XLP
and MIPS I6400, select this variable ASID support based upon
CONFIG_CPU_XLP and CONFIG_CPU_MIPSR6.
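
For reference, with Config4.AE set the probe added to decode_config4()
below ends up with

	asid_mask == MIPS_ENTRYHI_ASID | MIPS_ENTRYHI_ASIDX == 0x3ff

i.e. the full 10-bit ASID, while CPUs without the bit keep the classic
0xff mask.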

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jayachandran C. <jchandra@broadcom.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
 arch/mips/Kconfig                |  6 ++++++
 arch/mips/include/asm/cpu-info.h | 14 ++++++++++++++
 arch/mips/kernel/asm-offsets.c   | 10 ++++++++++
 arch/mips/kernel/cpu-probe.c     | 13 +++++++++++++
 arch/mips/kernel/genex.S         |  2 +-
 arch/mips/kvm/locore.S           | 14 ++++++++++++++
 6 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 132d1c68befc..55ca8fab4f4a 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1673,6 +1673,7 @@ config CPU_XLP
 	select CPU_HAS_PREFETCH
 	select CPU_MIPSR2
 	select CPU_SUPPORTS_HUGEPAGES
+	select MIPS_ASID_BITS_VARIABLE
 	help
 	  Netlogic Microsystems XLP processors.
 endchoice
@@ -1966,6 +1967,7 @@ config CPU_MIPSR2
 config CPU_MIPSR6
 	bool
 	default y if CPU_MIPS32_R6 || CPU_MIPS64_R6
+	select MIPS_ASID_BITS_VARIABLE
 	select MIPS_SPRAM
 
 config EVA
@@ -2409,9 +2411,13 @@ config MIPS_ASID_SHIFT
 
 config MIPS_ASID_BITS
 	int
+	default 0 if MIPS_ASID_BITS_VARIABLE
 	default 6 if CPU_R3000 || CPU_TX39XX
 	default 8
 
+config MIPS_ASID_BITS_VARIABLE
+	bool
+
 #
 # - Highmem only makes sense for the 32-bit kernel.
 # - The current highmem code will only work properly on physically indexed
diff --git a/arch/mips/include/asm/cpu-info.h b/arch/mips/include/asm/cpu-info.h
index 4cb3cdadc41e..ed7fc82ed29f 100644
--- a/arch/mips/include/asm/cpu-info.h
+++ b/arch/mips/include/asm/cpu-info.h
@@ -40,6 +40,9 @@ struct cache_desc {
 
 struct cpuinfo_mips {
 	unsigned long		asid_cache;
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+	unsigned long		asid_mask;
+#endif
 
 	/*
 	 * Capability and feature descriptor structure for MIPS CPU
@@ -138,7 +141,18 @@ static inline unsigned long cpu_asid_inc(void)
 
 static inline unsigned long cpu_asid_mask(struct cpuinfo_mips *cpuinfo)
 {
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+	return cpuinfo->asid_mask;
+#endif
 	return ((1 << CONFIG_MIPS_ASID_BITS) - 1) << CONFIG_MIPS_ASID_SHIFT;
 }
 
+static inline void set_cpu_asid_mask(struct cpuinfo_mips *cpuinfo,
+				     unsigned long asid_mask)
+{
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+	cpuinfo->asid_mask = asid_mask;
+#endif
+}
+
 #endif /* __ASM_CPU_INFO_H */
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 154e2039ea5e..1ea973b2abb1 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -14,6 +14,7 @@
 #include <linux/mm.h>
 #include <linux/kbuild.h>
 #include <linux/suspend.h>
+#include <asm/cpu-info.h>
 #include <asm/pm.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
@@ -338,6 +339,15 @@ void output_pm_defines(void)
 }
 #endif
 
+void output_cpuinfo_defines(void)
+{
+	COMMENT(" MIPS cpuinfo offsets. ");
+	DEFINE(CPUINFO_SIZE, sizeof(struct cpuinfo_mips));
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+	OFFSET(CPUINFO_ASID_MASK, cpuinfo_mips, asid_mask);
+#endif
+}
+
 void output_kvm_defines(void)
 {
 	COMMENT(" KVM/MIPS Specfic offsets. ");
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index b725b713b9f8..fb3fd2d3e565 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -715,6 +715,7 @@ static inline unsigned int decode_config4(struct cpuinfo_mips *c)
 	unsigned int newcf4;
 	unsigned int mmuextdef;
 	unsigned int ftlb_page = MIPS_CONF4_FTLBPAGESIZE;
+	unsigned long asid_mask;
 
 	config4 = read_c0_config4();
 
@@ -775,6 +776,18 @@ static inline unsigned int decode_config4(struct cpuinfo_mips *c)
 
 	c->kscratch_mask = (config4 >> 16) & 0xff;
 
+	asid_mask = MIPS_ENTRYHI_ASID;
+	if (config4 & MIPS_CONF4_AE)
+		asid_mask |= MIPS_ENTRYHI_ASIDX;
+	set_cpu_asid_mask(c, asid_mask);
+
+	/*
+	 * Warn if the computed ASID mask doesn't match the mask the kernel
+	 * is built for. This may indicate either a serious problem or an
+	 * easy optimisation opportunity, but either way should be addressed.
+	 */
+	WARN_ON(asid_mask != cpu_asid_mask(c));
+
 	return config4 & MIPS_CONF_M;
 }
 
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index 17374aef6f00..bff9644b9ad1 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -455,7 +455,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
 	.set	noreorder
 	/* check if TLB contains a entry for EPC */
 	MFC0	k1, CP0_ENTRYHI
-	andi	k1, MIPS_ENTRYHI_ASID
+	andi	k1, MIPS_ENTRYHI_ASID | MIPS_ENTRYHI_ASIDX
 	MFC0	k0, CP0_EPC
 	PTR_SRL k0, _PAGE_SHIFT + 1
 	PTR_SLL k0, _PAGE_SHIFT + 1
diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
index 1f2167bc847d..3ef03009de5f 100644
--- a/arch/mips/kvm/locore.S
+++ b/arch/mips/kvm/locore.S
@@ -137,7 +137,14 @@ FEXPORT(__kvm_mips_load_asid)
 	INT_SLL	t2, t2, 2                   /* x4 */
 	REG_ADDU t3, t1, t2
 	LONG_L	k0, (t3)
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+	li	t3, CPUINFO_SIZE/4
+	mul	t2, t2, t3		/* x sizeof(struct cpuinfo_mips)/4 */
+	LONG_L	t2, (cpu_data + CPUINFO_ASID_MASK)(t2)
+	and	k0, k0, t2
+#else
 	andi	k0, k0, MIPS_ENTRYHI_ASID
+#endif
 	mtc0	k0, CP0_ENTRYHI
 	ehb
 
@@ -449,7 +456,14 @@ __kvm_mips_return_to_guest:
 	INT_SLL	t2, t2, 2		/* x4 */
 	REG_ADDU t3, t1, t2
 	LONG_L	k0, (t3)
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+	li	t3, CPUINFO_SIZE/4
+	mul	t2, t2, t3		/* x sizeof(struct cpuinfo_mips)/4 */
+	LONG_L	t2, (cpu_data + CPUINFO_ASID_MASK)(t2)
+	and	k0, k0, t2
+#else
 	andi	k0, k0, MIPS_ENTRYHI_ASID
+#endif
 	mtc0	k0, CP0_ENTRYHI
 	ehb
 
-- 
2.4.10


* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
                   ` (6 preceding siblings ...)
  2016-05-06 13:36 ` [PATCH 7/7] MIPS: Support extended ASIDs James Hogan
@ 2016-05-09 13:23 ` Ralf Baechle
  2016-05-09 17:01   ` Maciej W. Rozycki
  7 siblings, 1 reply; 18+ messages in thread
From: Ralf Baechle @ 2016-05-09 13:23 UTC (permalink / raw)
  To: James Hogan
  Cc: Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

On Fri, May 06, 2016 at 02:36:17PM +0100, James Hogan wrote:

> This patchset is based on v4.6-rc4 and adds support for the optional
> extended ASIDs present since revision 3.5 of the MIPS32/MIPS64
> architecture, which extends the TLB ASIDs from 8 bits to 10 bits. These
> are known to be implemented in XLP and I6400 cores.
> 
> Along the way a few cleanups are made, particularly for KVM which
> manipulates ASIDs from assembly code.
> 
> Patch 6 lays most of the groundwork by abstracting asid masks so they
> can be variable, and patch 7 adds the actual support for extended ASIDs.
> 
> Patches 1-5 do some preliminary clean up around ASID handling, and in
> KVM's locore.S to allow patch 7 to support extended ASIDs.
> 
> The use of extended ASIDs can be observed by using the 'x' sysrq to dump
> TLB values, e.g. by repeatedly running this command:
> $(echo x > /proc/sysrq-trigger); dmesg -c | grep asid

Oh beloved ASIDs ...

Already PMC-Sierra's RM9000 / E9000 core had an extended ASID field, of
12 bits for 4096 ASID contexts.  Afaics this was an extension derived
in-house back in the wild days before everything had to be sanctioned by
the architecture folks, so there is nothing in a config register to test
for it.

PMCS simply extended the ASID field to 12 bits; none of the EntryHi bits
which today would conflict with doing so existed back then.

Afair there was yet another core with such a non-standard extension of the
ASID field.  R6000 and R8000 were weird, too.

Until commit f67e4ffc79905482c3b9b8c8dd65197bac7eb508 ("My proposal for
non-generic kernels:") we used to runtime-patch the kernel (that's the
cowboy patch the commit message is referring to) to allow for a
variable size and position of the ASID field in the EntryHi register.

  Ralf


* Re: [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run
  2016-05-06 13:36 ` [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run James Hogan
@ 2016-05-09 14:22   ` Paolo Bonzini
  2016-05-09 15:30     ` Ralf Baechle
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2016-05-09 14:22 UTC (permalink / raw)
  To: James Hogan, Ralf Baechle
  Cc: Paul Burton, Radim Krčmář, linux-mips, kvm



On 06/05/2016 15:36, James Hogan wrote:
> - It is actually redundant, since the host ASID will be restored
>   correctly by kvm_arch_vcpu_put(), which is called almost immediately
>   after kvm_arch_vcpu_ioctl_run() returns.

What happens if the guest does a rogue access to the area where the host
kernel resides?  Would that cause a wrong entry in the TLB?

Paolo


* Re: [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run
  2016-05-09 14:22   ` Paolo Bonzini
@ 2016-05-09 15:30     ` Ralf Baechle
  2016-05-09 19:42       ` James Hogan
  0 siblings, 1 reply; 18+ messages in thread
From: Ralf Baechle @ 2016-05-09 15:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: James Hogan, Paul Burton, Radim Krčmář, linux-mips, kvm

On Mon, May 09, 2016 at 04:22:33PM +0200, Paolo Bonzini wrote:

> On 06/05/2016 15:36, James Hogan wrote:
> > - It is actually redundant, since the host ASID will be restored
> >   correctly by kvm_arch_vcpu_put(), which is called almost immediately
> >   after kvm_arch_vcpu_ioctl_run() returns.
> 
> What happens if the guest does a rogue access to the area where the host
> kernel resides?  Would that cause a wrong entry in the TLB?

The kernel and lowmem reside in KSEG0/XKPHYS, which are "unmapped segments".
Unmapped means the TLB isn't accessed at all, nor does the ASID matter,
in the address translation process in one of these segments.

  Ralf


* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-09 13:23 ` [PATCH 0/7] MIPS: Add extended ASID support Ralf Baechle
@ 2016-05-09 17:01   ` Maciej W. Rozycki
  2016-05-09 19:04     ` James Hogan
  0 siblings, 1 reply; 18+ messages in thread
From: Maciej W. Rozycki @ 2016-05-09 17:01 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: James Hogan, Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

On Mon, 9 May 2016, Ralf Baechle wrote:

> Already PMC-Sierra's RM9000 / E9000 core had an extended ASID field, of
> 12 bits for 4096 ASID contexts.  Afaics this was an extension derived
> in-house back in the wild days before everything had to be sanctioned by
> the architecture folks, so there is nothing in a config register to test
> for it.

 Couldn't you just probe it in EntryHi directly, by writing all-ones, 
reading back and seeing how many bits have stuck?
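
Something like this, say (just a sketch, using the usual mipsregs.h
accessors, and assuming -- as on those parts -- that nothing else is
implemented in the probed EntryHi bits):

	unsigned long old_entryhi = read_c0_entryhi();
	unsigned long asid_mask;

	/* set every candidate ASID bit (up to 12 here) and see which stick */
	write_c0_entryhi(old_entryhi | 0xfff);
	back_to_back_c0_hazard();
	asid_mask = read_c0_entryhi() & 0xfff;
	write_c0_entryhi(old_entryhi);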

> PMCS simply extended the ASID field to 12 bits; none of the EntryHi bits
> which today would conflict with doing so existed back then.

 Especially as it was as simple as that -- bits 12:8 were hardwired to
zeros in the usual implementations back then.

  Maciej


* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-09 17:01   ` Maciej W. Rozycki
@ 2016-05-09 19:04     ` James Hogan
  2016-05-09 19:56       ` Maciej W. Rozycki
  0 siblings, 1 reply; 18+ messages in thread
From: James Hogan @ 2016-05-09 19:04 UTC (permalink / raw)
  To: Maciej W. Rozycki
  Cc: Ralf Baechle, Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

On Mon, May 09, 2016 at 06:01:27PM +0100, Maciej W. Rozycki wrote:
> On Mon, 9 May 2016, Ralf Baechle wrote:
> 
> > Already PMC-Sierra's RM9000 / E9000 core had an extended ASID field, of
> > 12 bits for 4096 ASID contexts.  Afaics this was an extension derived
> > in-house back in the wild days before everything had to be sanctioned by
> > the architecture folks, so there is nothing in a config register to test
> > for it.
> 
>  Couldn't you just probe it in EntryHi directly, by writing all-ones, 
> reading back and seeing how many bits have stuck?

Note, the tlbinv feature in recent versions of MIPS32/MIPS64 arch has
EHINV bit in bit 10 (if I remember right) of EntryHi, which marks whole
tlb entry as invalid, and the small pages feature (for 1k pages) extends
VPN field downwards to bit 11.
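
In EntryHi bit terms (using the mipsregs.h names from the patch 2 hunk
where they exist) that is roughly:

	bits 12:11	VPN2X (the 1KB page extension)
	bit  10		MIPS_ENTRYHI_EHINV
	bits  9:8	MIPS_ENTRYHI_ASIDX
	bits  7:0	MIPS_ENTRYHI_ASID

so a blind write-ones probe of the bits above 7 can't be assumed to
report ASID width on recent architecture revisions.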

Cheers
James

> 
> > PMCS simply extended the ASID field to 12 bits; none of the EntryHi bits
> > which today would conflict with doing so existed back then.
> 
>  Especially as it was as simple as that -- bits 12:8 were hardwired to
> zeros in the usual implementations back then.
> 
>   Maciej


* Re: [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run
  2016-05-09 15:30     ` Ralf Baechle
@ 2016-05-09 19:42       ` James Hogan
  0 siblings, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-09 19:42 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Paolo Bonzini, Paul Burton, Radim Krčmář, linux-mips, kvm

On Mon, May 09, 2016 at 05:30:44PM +0200, Ralf Baechle wrote:
> On Mon, May 09, 2016 at 04:22:33PM +0200, Paolo Bonzini wrote:
> 
> > On 06/05/2016 15:36, James Hogan wrote:
> > > - It is actually redundant, since the host ASID will be restored
> > >   correctly by kvm_arch_vcpu_put(), which is called almost immediately
> > >   after kvm_arch_vcpu_ioctl_run() returns.
> > 
> > What happens if the guest does a rogue access to the area where the host
> > kernel resides?  Would that cause a wrong entry in the TLB?
> 
> The kernel and lowmem reside in KSEG0/XKPHYS, which are "unmapped segments".
> Unmapped means the TLB isn't accessed at all, nor does the ASID matter,
> in the address translation process in one of these segments.

Yes, although kernel modules (KVM can be built as a module) do run from
TLB-mapped memory. Trap & emulate KVM as found in mainline should still
work, though, since accessing kernel segments from user mode (which is
where the guest runs) causes an address error exception, triggering
emulation of MMIO accesses rather than TLB fills.

Hmm, perhaps locore.S should nevertheless restore the correct root ASID
before returning to the caller or calling the exit handler, either of
which could be TLB-mapped module code.
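
(For illustration, a hedged C-level sketch of the restore being
suggested -- the real change would be locore.S assembly, and the helper
name is made up.  It restores the host ASID much as kvm_arch_vcpu_put()
does, using the existing cpu_asid() helper:)

#include <linux/sched.h>
#include <linux/smp.h>
#include <asm/hazards.h>
#include <asm/mipsregs.h>
#include <asm/mmu_context.h>

/* Restore the root (host) ASID for current->mm before control can
 * reach TLB-mapped module code.  Caller must have preemption off. */
static inline void restore_root_asid(void)
{
	int cpu = smp_processor_id();

	write_c0_entryhi(cpu_asid(cpu, current->mm));
	mtc0_tlbw_hazard();	/* clear the CP0 write hazard */
}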

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-09 19:04     ` James Hogan
@ 2016-05-09 19:56       ` Maciej W. Rozycki
  2016-05-09 19:59         ` James Hogan
  2016-05-10  7:34         ` Ralf Baechle
  0 siblings, 2 replies; 18+ messages in thread
From: Maciej W. Rozycki @ 2016-05-09 19:56 UTC (permalink / raw)
  To: James Hogan
  Cc: Ralf Baechle, Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

On Mon, 9 May 2016, James Hogan wrote:

> > > Already PMC-Sierra's RM9000 / E9000 core had an extended ASID field, of
> > > 12 bits for 4096 ASID contexts.  Afaics this was an extension derived
> > > in-house back in the wild days before everything had to be sanctioned by
> > > the architecture folks, so there is nothing in a config register to test
> > > for it.
> > 
> >  Couldn't you just probe it in EntryHi directly, by writing all-ones, 
> > reading back and seeing how many bits have stuck?
> 
> Note that the TLBINV feature in recent versions of the MIPS32/MIPS64
> architecture has an EHINV bit at bit 10 (if I remember right) of
> EntryHi, which marks the whole TLB entry as invalid, and the small
> pages feature (for 1 KB pages) extends the VPN field downwards to
> bit 11.

 Yes, but these are not legacy architectures, are they?  Since you've got 
bits set across Config registers you don't need to resort to poking at 
other registers.  Although there are exceptions like PABITS and SEGBITS 
(we ought to handle this one day actually, for correct unaligned access 
emulation -- right now you get a repeated AdEL exception in emulation code 
for what originally was an unaligned out of range kernel XKPHYS access, 
making it a big pain to debug; I've had a hack for this since 2.4 days, 
but it should be done properly).
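
(For illustration, a hedged sketch of that Config-register route on the
modern cores this series targets: derive the ASID width from Config4.AE
rather than poking EntryHi.  read_c0_config3()/read_c0_config4() and
MIPS_CONF_M are the usual <asm/mipsregs.h> helpers; MIPS_CONF4_AE here
stands for the architected Config4.AE bit, and the exact macro name and
function name are assumptions for the example:)

#include <asm/mipsregs.h>

/* Config3.M says whether Config4 exists, and Config4.AE says whether
 * EntryHi.ASID is extended from 8 to 10 bits. */
static unsigned int decode_asid_mask(void)
{
	if (!(read_c0_config3() & MIPS_CONF_M))
		return 0xff;		/* no Config4: classic 8-bit ASID */

	if (read_c0_config4() & MIPS_CONF4_AE)
		return 0x3ff;		/* Config4.AE: 10-bit ASID */

	return 0xff;
}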

 In the old days pretty much nothing was recorded in the single Config 
register (very old chips didn't even have that -- you had to size caches 
manually for example), but stuff could often be determined via other 
means, sometimes (like probably here) without detailed checks on PRId.

  Maciej

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-09 19:56       ` Maciej W. Rozycki
@ 2016-05-09 19:59         ` James Hogan
  2016-05-10  7:34         ` Ralf Baechle
  1 sibling, 0 replies; 18+ messages in thread
From: James Hogan @ 2016-05-09 19:59 UTC (permalink / raw)
  To: Maciej W. Rozycki
  Cc: Ralf Baechle, Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

[-- Attachment #1: Type: text/plain, Size: 1779 bytes --]

On Mon, May 09, 2016 at 08:56:51PM +0100, Maciej W. Rozycki wrote:
> On Mon, 9 May 2016, James Hogan wrote:
> 
> > > > Already PMC-Sierra's RM9000 / E9000 core had an extended ASID field, of
> > > > 12 bits for 4096 ASID contexts.  Afaics this was an extension derived
> > > > in-house back in the wild days before everything had to be sanctioned by
> > > > the architecture folks, so there is nothing in a config register to test
> > > > for it.
> > > 
> > >  Couldn't you just probe it in EntryHi directly, by writing all-ones, 
> > > reading back and seeing how many bits have stuck?
> > 
> > Note that the TLBINV feature in recent versions of the MIPS32/MIPS64
> > architecture has an EHINV bit at bit 10 (if I remember right) of
> > EntryHi, which marks the whole TLB entry as invalid, and the small
> > pages feature (for 1 KB pages) extends the VPN field downwards to
> > bit 11.
> 
>  Yes, but these are not legacy architectures, are they?

Right.

> Since you've got
> bits set across Config registers you don't need to resort to poking at 
> other registers.  Although there are exceptions like PABITS and SEGBITS 
> (we ought to handle this one day actually, for correct unaligned access 
> emulation -- right now you get a repeated AdEL exception in emulation code 
> for what originally was an unaligned out of range kernel XKPHYS access, 
> making it a big pain to debug; I've had a hack for this since 2.4 days, 
> but it should be done properly).
> 
>  In the old days pretty much nothing was recorded in the single Config 
> register (very old chips didn't even have that -- you had to size caches 
> manually for example), but stuff could often be determined via other 
> means, sometimes (like probably here) without detailed checks on PRId.

Cheers
James

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-09 19:56       ` Maciej W. Rozycki
  2016-05-09 19:59         ` James Hogan
@ 2016-05-10  7:34         ` Ralf Baechle
  2016-05-10  8:55           ` Maciej W. Rozycki
  1 sibling, 1 reply; 18+ messages in thread
From: Ralf Baechle @ 2016-05-10  7:34 UTC (permalink / raw)
  To: Maciej W. Rozycki
  Cc: James Hogan, Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

On Mon, May 09, 2016 at 08:56:51PM +0100, Maciej W. Rozycki wrote:

>  Yes, but these are not legacy architectures, are they?  Since you've got 
> bits set across Config registers you don't need to resort to poking at 
> other registers.  Although there are exceptions like PABITS and SEGBITS 
> (we ought to handle this one day actually, for correct unaligned access 
> emulation -- right now you get a repeated AdEL exception in emulation code 
> for what originally was an unaligned out of range kernel XKPHYS access, 
> making it a big pain to debug; I've had a hack for this since 2.4 days, 
> but it should be done properly).

Yeah, it's simply an implementation guided by the SISO principle: shit in,
shit out.  The issue you're having rarely hurts, and if a simple hack can
solve it I'm in principle open to considering it for merging.

>  In the old days pretty much nothing was recorded in the single Config 
> register (very old chips didn't even have that -- you had to size caches 
> manually for example), but stuff could often be determined via other 
> means, sometimes (like probably here) without detailed checks on PRId.

Sizing the R4000/R4400 second level cache for example.  I'd call that
taking the RISC design principle to the edge :-)

  Ralf

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/7] MIPS: Add extended ASID support
  2016-05-10  7:34         ` Ralf Baechle
@ 2016-05-10  8:55           ` Maciej W. Rozycki
  0 siblings, 0 replies; 18+ messages in thread
From: Maciej W. Rozycki @ 2016-05-10  8:55 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: James Hogan, Paul Burton, Manuel Lauss, Jayachandran C.,
	Paolo Bonzini, Radim Krčmář,
	linux-mips, kvm

On Tue, 10 May 2016, Ralf Baechle wrote:

> > (we ought to handle this one day actually, for correct unaligned access 
> > emulation -- right now you get a repeated AdEL exception in emulation code 
> > for what originally was an unaligned out of range kernel XKPHYS access, 
> > making it a big pain to debug; I've had a hack for this since 2.4 days, 
> > but it should be done properly).
> 
> Yeah, it's simply an implementation guided by the SISO principle.  Shit in,
> shit out.  The issue you're having rarely hurts and if a simple hack can
> solve it I'm in principle open to consider it for merging.

 The problem with my hack is that it is only correct for the R4000/R4400,
as I just hardwired the bits I needed to debug the issues I had.  Well,
maybe it works for a couple of other processors as well, but I'd say
it's not upstream quality.

> >  In the old days pretty much nothing was recorded in the single Config 
> > register (very old chips didn't even have that -- you had to size caches 
> > manually for example), but stuff could often be determined via other 
> > means, sometimes (like probably here) without detailed checks on PRId.
> 
> Sizing the R4000/R4400 second level cache for example.  I'd call that
> taking the RISC design principle to the edge :-)

 Or the corresponding R2000/R3000 stuff.  I'm still rather sceptical about
our cache line size probing results; they seem too different from what the
relevant documentation says.  To say nothing of the fill vs. invalidation
line size difference (I think we just report the latter, while it's the
former that we're more interested in and that the algorithm is supposed
to report).

  Maciej

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2016-05-10  8:55 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-05-06 13:36 [PATCH 0/7] MIPS: Add extended ASID support James Hogan
2016-05-06 13:36 ` [PATCH 1/7] MIPS: KVM/locore.S: Don't preserve host ASID around vcpu_run James Hogan
2016-05-09 14:22   ` Paolo Bonzini
2016-05-09 15:30     ` Ralf Baechle
2016-05-09 19:42       ` James Hogan
2016-05-06 13:36 ` [PATCH 2/7] MIPS: Add & use CP0_EntryHi ASID definitions James Hogan
2016-05-06 13:36 ` [PATCH 3/7] MIPS: KVM: Abstract guest ASID mask James Hogan
2016-05-06 13:36 ` [PATCH 4/7] MIPS: KVM/locore.S: Only preserve callee saved registers James Hogan
2016-05-06 13:36 ` [PATCH 5/7] MIPS: KVM/locore.S: Relax noat James Hogan
2016-05-06 13:36 ` [PATCH 6/7] MIPS: Retrieve ASID masks using function accepting struct cpuinfo_mips James Hogan
2016-05-06 13:36 ` [PATCH 7/7] MIPS: Support extended ASIDs James Hogan
2016-05-09 13:23 ` [PATCH 0/7] MIPS: Add extended ASID support Ralf Baechle
2016-05-09 17:01   ` Maciej W. Rozycki
2016-05-09 19:04     ` James Hogan
2016-05-09 19:56       ` Maciej W. Rozycki
2016-05-09 19:59         ` James Hogan
2016-05-10  7:34         ` Ralf Baechle
2016-05-10  8:55           ` Maciej W. Rozycki
