* Re: [PATCH 2/13] KVM: MIPS: Pass type of fault down to kvm_mips_map_page()
[not found] <201701171906397244491@zte.com.cn>
@ 2017-01-17 13:27 ` James Hogan
2017-01-17 13:27 ` James Hogan
0 siblings, 1 reply; 8+ messages in thread
From: James Hogan @ 2017-01-17 13:27 UTC (permalink / raw)
To: jiang.biao2; +Cc: linux-mips, pbonzini, rkrcmar, ralf, kvm
Hi,
On Tue, Jan 17, 2017 at 07:06:39PM +0800, jiang.biao2@zte.com.cn wrote:
> > kvm_mips_map_page() will need to know whether the fault was due to a
> > read or a write in order to support dirty page tracking,
> > KVM_CAP_SYNC_MMU, and read only memory regions, so get that information
> > passed down to it via new bool write_fault arguments to various
> > functions.
>
> Hi, James,
>
> Maybe it's not a good idea to add an argument and pass it down to various
> functions; it will make the interface more complicated, add more
> possibilities, and be more difficult to test.
>
What kvm_mips_map_page() needs to know is whether it is being triggered in
response to a write, but it is abstracted and independent from what host
exception triggered it. It is only really concerned with the GPA space.
>
> As far as I can see, what you need is to let kvm_mips_map_page() know the
> *reason*. Would it be better to let kvm_mips_map_page() get *exccode* from
> the CP0 directly, or provide an interface to get the fault reason?
I presume you mean from the saved host cause register in the VCPU
context (since intervening exceptions/interrupts will clobber the actual
CP0 Cause register).
It directly needs to know whether it can get away with a read-only
mapping, and although it directly depends on a GVA segment, it doesn't
necessarily relate to a memory access made by the guest.
kvm_mips_map_page() is called via:
- kvm_mips_handle_kseg0_tlb_fault()
for faults in guest KSeg0
- kvm_mips_handle_mapped_seg_tlb_fault()
for faults in guest TLB mapped segments
From these functions:
- kvm_trap_emul_handle_tlb_mod() (write_fault = true)
in response to a write to a read-only page (exccode = MOD)
- kvm_trap_emul_handle_tlb_miss() (write_fault = true or false)
in response to a read or write when TLB mapping absent or invalid
(exccode = TLBL/TLBS)
- via the kvm_trap_emul_gva_fault() helper when KVM needs to directly
access GVA space:
- kvm_get_inst() (write_fault = false)
when reading an instruction from guest RAM (when BadInstr/BadInstrP
registers unavailable) which needs to be emulated, i.e. reserved
instruction (exccode = RI), cop0 unusable (exccode = CPU), MMIO
load/store (TLBL/TLBS/MOD/ADEL/ADES), branch delay slot handling
for MMIO load/store, or debug output for an unhandled exccode
(exccode = ?)
- kvm_mips_trans_replace() (write_fault = true)
to write a replacement instruction into guest memory (mainly
exccode = CPU)
- kvm_mips_guest_cache_op() (write_fault = false)
cache instruction emulation (exccode = CPU)
So there is a many:many mapping from exccode to write_fault for these
exccodes:
- CPU (CoProcessor Unusable)
could be reading an instruction or servicing a CACHE instruction
(write_fault = false) or replacing an instruction (write_fault =
true).
- MOD, TLBS, ADES
could be the write itself (write_fault = true), or a read of the
instruction triggering the exception or the prior branch instruction
(write_fault = false).
Cheers
James
* Re: [PATCH 2/13] KVM: MIPS: Pass type of fault down to kvm_mips_map_page()
[not found] <201701191629518310684@zte.com.cn>
@ 2017-01-19 9:08 ` James Hogan
2017-01-19 9:08 ` James Hogan
0 siblings, 1 reply; 8+ messages in thread
From: James Hogan @ 2017-01-19 9:08 UTC (permalink / raw)
To: jiang.biao2; +Cc: linux-mips, pbonzini, rkrcmar, ralf, kvm
Hi,
On Thu, Jan 19, 2017 at 04:29:51PM +0800, jiang.biao2@zte.com.cn wrote:
> Hi, James
>
> > What's wrong with bool parameters?
> >
> > It needs a GPA mapping created, either for a read or a write depending
> > on the caller. bool would seem ideally suited for just such a situation,
> > and in fact it's exactly what the KVM GPA fault code path does to pass
> > whether the page needs to be writable:
> >
> > kvm_mips_map_page() -> gfn_to_pfn_prot() -> __gfn_to_pfn_memslot() ->
> > hva_to_pfn() -> hva_to_pfn_slow().
> >
> > so all this really does is extend that pattern up the other way as
> > necessary to be able to provide that information to gfn_to_pfn_prot().
> A bool parameter may make the code less readable. :-)
>
> The way used is indeed consistent with the existing pattern, but the tramp
> data passed around and the long parameter lists may be a code smell (not
> sure for the kernel :-) ), which might be improved by some means.
>
> No offense, just personal opinion. :-)
No offense taken :-)
Thanks again for reviewing,
Cheers
James
* Re: [PATCH 2/13] KVM: MIPS: Pass type of fault down to kvm_mips_map_page()
[not found] <201701181618464411994@zte.com.cn>
@ 2017-01-18 12:12 ` James Hogan
2017-01-18 12:12 ` James Hogan
0 siblings, 1 reply; 8+ messages in thread
From: James Hogan @ 2017-01-18 12:12 UTC (permalink / raw)
To: jiang.biao2; +Cc: linux-mips, pbonzini, rkrcmar, ralf, kvm
Hi,
On Wed, Jan 18, 2017 at 04:18:46PM +0800, jiang.biao2@zte.com.cn wrote:
> Hi,
>
> > I presume you mean from the saved host cause register in the VCPU
> > context (since intervening exceptions/interrupts will clobber the actual
> > CP0 Cause register).
> >
> > It directly needs to know whether it can get away with a read-only
> > mapping, and although it directly depends on a GVA segment, it doesn't
> > necessarily relate to a memory access made by the guest.
> >
> > kvm_mips_map_page() is called via:
> >
> > - kvm_mips_handle_kseg0_tlb_fault()
> > for faults in guest KSeg0
> >
> > - kvm_mips_handle_mapped_seg_tlb_fault()
> > for faults in guest TLB mapped segments
> >
> > From these functions:
> >
> > - kvm_trap_emul_handle_tlb_mod() (write_fault = true)
> > in response to a write to a read-only page (exccode = MOD)
> >
> > - kvm_trap_emul_handle_tlb_miss() (write_fault = true or false)
> > in response to a read or write when TLB mapping absent or invalid
> > (exccode = TLBL/TLBS)
> >
> >
> > So there is a many:many mapping from exccode to write_fault for these
> > exccodes:
> >
> > - CPU (CoProcessor Unusable)
> > could be reading instruction or servicing a CACHE instruction
> > (write_fault = false) or replacing an instruction (write_fault =
> > true).
>
> > - MOD, TLBS, ADES
> > could be the write itself (write_fault = true), or a read of the
> > instruction triggering the exception or the prior branch instruction
> > (write_fault = false).
> >
> Thanks for the detail, it is more complicated than I thought.
>
> But there may still be a bad smell from the long parameter lists,
> especially from the bool type ones.
What's wrong with bool parameters?
It needs a GPA mapping created, either for a read or a write depending
on the caller. bool would seem ideally suited for just such a situation,
and in fact it's exactly what the KVM GPA fault code path does to pass
whether the page needs to be writable:
kvm_mips_map_page() -> gfn_to_pfn_prot() -> __gfn_to_pfn_memslot() ->
hva_to_pfn() -> hva_to_pfn_slow().
so all this really does is extend that pattern up the other way as
necessary to be able to provide that information to gfn_to_pfn_prot().
Cheers
James
>
> Maybe there is a better way to handle that, but I cannot figure it out
> right now because of the complexity.
* [PATCH 2/13] KVM: MIPS: Pass type of fault down to kvm_mips_map_page()
2017-01-16 12:49 [PATCH 0/13] KVM: MIPS: Dirty logging, SYNC_MMU & READONLY_MEM James Hogan
@ 2017-01-16 12:49 ` James Hogan
2017-01-16 12:49 ` James Hogan
0 siblings, 1 reply; 8+ messages in thread
From: James Hogan @ 2017-01-16 12:49 UTC (permalink / raw)
To: linux-mips
Cc: James Hogan, Paolo Bonzini, Radim Krčmář,
Ralf Baechle, kvm
kvm_mips_map_page() will need to know whether the fault was due to a
read or a write in order to support dirty page tracking,
KVM_CAP_SYNC_MMU, and read only memory regions, so get that information
passed down to it via new bool write_fault arguments to various
functions.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
---
arch/mips/include/asm/kvm_host.h | 9 ++++++---
arch/mips/kvm/emulate.c | 7 ++++---
arch/mips/kvm/mmu.c | 21 +++++++++++++--------
arch/mips/kvm/trap_emul.c | 4 ++--
4 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index c24c1c23196b..70c2dd353468 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -597,19 +597,22 @@ u32 kvm_get_user_asid(struct kvm_vcpu *vcpu);
u32 kvm_get_commpage_asid (struct kvm_vcpu *vcpu);
extern int kvm_mips_handle_kseg0_tlb_fault(unsigned long badbaddr,
- struct kvm_vcpu *vcpu);
+ struct kvm_vcpu *vcpu,
+ bool write_fault);
extern int kvm_mips_handle_commpage_tlb_fault(unsigned long badvaddr,
struct kvm_vcpu *vcpu);
extern int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
struct kvm_mips_tlb *tlb,
- unsigned long gva);
+ unsigned long gva,
+ bool write_fault);
extern enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
u32 *opc,
struct kvm_run *run,
- struct kvm_vcpu *vcpu);
+ struct kvm_vcpu *vcpu,
+ bool write_fault);
extern enum emulation_result kvm_mips_handle_tlbmod(u32 cause,
u32 *opc,
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index 72eb307a61a7..a47f8af9193e 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -2705,7 +2705,8 @@ enum emulation_result kvm_mips_check_privilege(u32 cause,
enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
u32 *opc,
struct kvm_run *run,
- struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *vcpu,
+ bool write_fault)
{
enum emulation_result er = EMULATE_DONE;
u32 exccode = (cause >> CAUSEB_EXCCODE) & 0x1f;
@@ -2761,8 +2762,8 @@ enum emulation_result kvm_mips_handle_tlbmiss(u32 cause,
* OK we have a Guest TLB entry, now inject it into the
* shadow host TLB
*/
- if (kvm_mips_handle_mapped_seg_tlb_fault(vcpu, tlb,
- va)) {
+ if (kvm_mips_handle_mapped_seg_tlb_fault(vcpu, tlb, va,
+ write_fault)) {
kvm_err("%s: handling mapped seg tlb fault for %lx, index: %u, vcpu: %p, ASID: %#lx\n",
__func__, va, index, vcpu,
read_c0_entryhi());
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index b3da473e1569..1af65f2e6bb7 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -308,6 +308,7 @@ bool kvm_mips_flush_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
* kvm_mips_map_page() - Map a guest physical page.
* @vcpu: VCPU pointer.
* @gpa: Guest physical address of fault.
+ * @write_fault: Whether the fault was due to a write.
* @out_entry: New PTE for @gpa (written on success unless NULL).
* @out_buddy: New PTE for @gpa's buddy (written on success unless
* NULL).
@@ -327,6 +328,7 @@ bool kvm_mips_flush_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn)
* as an MMIO access.
*/
static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
+ bool write_fault,
pte_t *out_entry, pte_t *out_buddy)
{
struct kvm *kvm = vcpu->kvm;
@@ -558,7 +560,8 @@ void kvm_mips_flush_gva_pt(pgd_t *pgd, enum kvm_mips_flush flags)
/* XXXKYMA: Must be called with interrupts disabled */
int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
- struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *vcpu,
+ bool write_fault)
{
unsigned long gpa;
kvm_pfn_t pfn0, pfn1;
@@ -576,10 +579,11 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
gpa = KVM_GUEST_CPHYSADDR(badvaddr & (PAGE_MASK << 1));
vaddr = badvaddr & (PAGE_MASK << 1);
- if (kvm_mips_map_page(vcpu, gpa, &pte_gpa[0], NULL) < 0)
+ if (kvm_mips_map_page(vcpu, gpa, write_fault, &pte_gpa[0], NULL) < 0)
return -1;
- if (kvm_mips_map_page(vcpu, gpa | PAGE_SIZE, &pte_gpa[1], NULL) < 0)
+ if (kvm_mips_map_page(vcpu, gpa | PAGE_SIZE, write_fault, &pte_gpa[1],
+ NULL) < 0)
return -1;
pfn0 = pte_pfn(pte_gpa[0]);
@@ -604,7 +608,8 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
struct kvm_mips_tlb *tlb,
- unsigned long gva)
+ unsigned long gva,
+ bool write_fault)
{
kvm_pfn_t pfn;
long tlb_lo = 0;
@@ -621,8 +626,8 @@ int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
tlb_lo = tlb->tlb_lo[idx];
/* Find host PFN */
- if (kvm_mips_map_page(vcpu, mips3_tlbpfn_to_paddr(tlb_lo), &pte_gpa,
- NULL) < 0)
+ if (kvm_mips_map_page(vcpu, mips3_tlbpfn_to_paddr(tlb_lo), write_fault,
+ &pte_gpa, NULL) < 0)
return -1;
pfn = pte_pfn(pte_gpa);
@@ -757,7 +762,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
int index;
if (KVM_GUEST_KSEGX(gva) == KVM_GUEST_KSEG0) {
- if (kvm_mips_handle_kseg0_tlb_fault(gva, vcpu) < 0)
+ if (kvm_mips_handle_kseg0_tlb_fault(gva, vcpu, write) < 0)
return KVM_MIPS_GPA;
} else if ((KVM_GUEST_KSEGX(gva) < KVM_GUEST_KSEG0) ||
KVM_GUEST_KSEGX(gva) == KVM_GUEST_KSEG23) {
@@ -774,7 +779,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
if (write && !TLB_IS_DIRTY(*tlb, gva))
return KVM_MIPS_TLBMOD;
- if (kvm_mips_handle_mapped_seg_tlb_fault(vcpu, tlb, gva))
+ if (kvm_mips_handle_mapped_seg_tlb_fault(vcpu, tlb, gva, write))
return KVM_MIPS_GPA;
} else {
return KVM_MIPS_GVA;
diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
index 970ab216b355..33888f1a89f4 100644
--- a/arch/mips/kvm/trap_emul.c
+++ b/arch/mips/kvm/trap_emul.c
@@ -159,7 +159,7 @@ static int kvm_trap_emul_handle_tlb_miss(struct kvm_vcpu *vcpu, bool store)
* into the shadow host TLB
*/
- er = kvm_mips_handle_tlbmiss(cause, opc, run, vcpu);
+ er = kvm_mips_handle_tlbmiss(cause, opc, run, vcpu, store);
if (er == EMULATE_DONE)
ret = RESUME_GUEST;
else {
@@ -172,7 +172,7 @@ static int kvm_trap_emul_handle_tlb_miss(struct kvm_vcpu *vcpu, bool store)
* not expect to ever get them
*/
if (kvm_mips_handle_kseg0_tlb_fault
- (vcpu->arch.host_cp0_badvaddr, vcpu) < 0) {
+ (vcpu->arch.host_cp0_badvaddr, vcpu, store) < 0) {
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
ret = RESUME_HOST;
}
--
git-series 0.8.10