* [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
@ 2014-08-07 23:12 Tamas K Lengyel
  2014-08-07 23:12 ` [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information Tamas K Lengyel
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Tamas K Lengyel @ 2014-08-07 23:12 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, Tamas K Lengyel, ian.campbell, stefano.stabellini,
	jun.nakajima, eddie.dong, ian.jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, andrew.cooper3, boris.ostrovsky

On Intel EPT the exit qualification generated by a violation also includes a bit (EPT_GLA_FAULT) which conveys the following information: it is set if the access causing the EPT violation is to a guest-physical address that is the translation of a linear address, and clear if the access causing the EPT violation is to a paging-structure entry as part of a page walk or the update of an accessed or dirty bit.

For more information see Table 27-7 in the Intel SDM.

This patch extends the mem_event system to deliver this extra information, which could be useful for determining the cause of a violation.

v3: Style fixes.
v2: Split gla_fault into fault_in_gpt and fault_gla to be more compatible with the AMD implementation.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 xen/arch/x86/hvm/hvm.c         |  8 ++++++--
 xen/arch/x86/hvm/svm/svm.c     |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c     | 11 ++++++++++-
 xen/arch/x86/mm/p2m.c          |  5 ++++-
 xen/include/asm-x86/hvm/hvm.h  |  5 ++++-
 xen/include/asm-x86/p2m.h      |  3 ++-
 xen/include/public/mem_event.h |  4 +++-
 7 files changed, 30 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e834406..15bd01f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned long cr2)
 int hvm_hap_nested_page_fault(paddr_t gpa,
                               bool_t gla_valid,
                               unsigned long gla,
+                              bool_t fault_in_gpt,
+                              bool_t fault_gla,
                               bool_t access_r,
                               bool_t access_w,
                               bool_t access_x)
@@ -2832,8 +2834,10 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
 
         if ( violation )
         {
-            if ( p2m_mem_access_check(gpa, gla_valid, gla, access_r, 
-                                        access_w, access_x, &req_ptr) )
+            if ( p2m_mem_access_check(gpa, gla_valid, gla,
+                                      fault_in_gpt, fault_gla,
+                                      access_r, access_w, access_x,
+                                      &req_ptr) )
             {
                 fall_through = 1;
             } else {
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 76616ac..9e35e7a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1403,7 +1403,7 @@ static void svm_do_nested_pgfault(struct vcpu *v,
     p2m_access_t p2ma;
     struct p2m_domain *p2m = NULL;
 
-    ret = hvm_hap_nested_page_fault(gpa, 0, ~0ul, 
+    ret = hvm_hap_nested_page_fault(gpa, 0, ~0ul, 0, 0,
                                     1, /* All NPFs count as reads */
                                     npfec & PFEC_write_access, 
                                     npfec & PFEC_insn_fetch);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2caa04a..a7a0396 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2353,6 +2353,7 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     p2m_type_t p2mt;
     int ret;
     struct domain *d = current->domain;
+    bool_t fault_in_gpt, fault_gla;
 
     if ( tb_init_done )
     {
@@ -2371,11 +2372,19 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
     }
 
     if ( qualification & EPT_GLA_VALID )
+    {
         __vmread(GUEST_LINEAR_ADDRESS, &gla);
+        fault_gla = !!(qualification & EPT_GLA_FAULT);
+        fault_in_gpt = !fault_gla;
+    }
     else
+    {
         gla = ~0ull;
+        fault_gla = fault_in_gpt = 0;
+    }
     ret = hvm_hap_nested_page_fault(gpa,
-                                    !!(qualification & EPT_GLA_VALID), gla,
+                                    !!(qualification & EPT_GLA_VALID),
+                                    gla, fault_in_gpt, fault_gla,
                                     !!(qualification & EPT_READ_VIOLATION),
                                     !!(qualification & EPT_WRITE_VIOLATION),
                                     !!(qualification & EPT_EXEC_VIOLATION));
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index bca9f0f..132f0d2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1323,7 +1323,8 @@ void p2m_mem_paging_resume(struct domain *d)
     }
 }
 
-bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla, 
+bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
+                          bool_t fault_in_gpt, bool_t fault_gla,
                           bool_t access_r, bool_t access_w, bool_t access_x,
                           mem_event_request_t **req_ptr)
 {
@@ -1405,6 +1406,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
         req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
         req->gla_valid = gla_valid;
         req->gla = gla;
+        req->fault_in_gpt = fault_in_gpt;
+        req->fault_gla = fault_gla;
         req->access_r = access_r;
         req->access_w = access_w;
         req->access_x = access_x;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 0ebd478..de755b6 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -456,7 +456,10 @@ static inline void hvm_invalidate_regs_fields(struct cpu_user_regs *regs)
 }
 
 int hvm_hap_nested_page_fault(paddr_t gpa,
-                              bool_t gla_valid, unsigned long gla,
+                              bool_t gla_valid,
+                              unsigned long gla,
+                              bool_t fault_in_gpt,
+                              bool_t fault_gla,
                               bool_t access_r,
                               bool_t access_w,
                               bool_t access_x);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 0ddbadb..59803c5 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -597,7 +597,8 @@ void p2m_mem_paging_resume(struct domain *d);
  * been promoted with no underlying vcpu pause. If the req_ptr has been populated, 
  * then the caller must put the event in the ring (once having released get_gfn*
  * locks -- caller must also xfree the request. */
-bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla, 
+bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
+                          bool_t fault_in_gpt, bool_t fault_gla,
                           bool_t access_r, bool_t access_w, bool_t access_x,
                           mem_event_request_t **req_ptr);
 /* Resumes the running of the VCPU, restarting the last instruction */
diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
index 3831b41..1ba6863 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -62,7 +62,9 @@ typedef struct mem_event_st {
     uint16_t access_w:1;
     uint16_t access_x:1;
     uint16_t gla_valid:1;
-    uint16_t available:12;
+    uint16_t fault_in_gpt:1;
+    uint16_t fault_gla:1;
+    uint16_t available:10;
 
     uint16_t reason;
 } mem_event_request_t, mem_event_response_t;
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information
  2014-08-07 23:12 [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tamas K Lengyel
@ 2014-08-07 23:12 ` Tamas K Lengyel
  2014-08-08  8:07   ` Andrew Cooper
  2014-08-07 23:33 ` [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tian, Kevin
  2014-08-08  7:00 ` Jan Beulich
  2 siblings, 1 reply; 13+ messages in thread
From: Tamas K Lengyel @ 2014-08-07 23:12 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, Tamas K Lengyel, ian.campbell, stefano.stabellini,
	jun.nakajima, eddie.dong, ian.jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, andrew.cooper3, boris.ostrovsky

Extend the print-out of the memory violations to show gla valid/fault information.

v2: Update to new fields and change printing 1/0 to y/n.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 tools/tests/xen-access/xen-access.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 090df5f..378b205 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -566,13 +566,16 @@ int main(int argc, char *argv[])
                 }
 
                 printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
-                       PRIx64") gla %016"PRIx64" (vcpu %d)\n",
+                       PRIx64") gla %016"PRIx64" (valid: %c) fault in gpt: %c gpa has gla: %c (vcpu %d)\n",
                        req.access_r ? 'r' : '-',
                        req.access_w ? 'w' : '-',
                        req.access_x ? 'x' : '-',
                        req.gfn,
                        req.offset,
                        req.gla,
+                       req.gla_valid ? 'y' : 'n',
+                       req.fault_in_gpt ? 'y' : 'n',
+                       req.fault_gla ? 'y': 'n',
                        req.vcpu_id);
 
                 if ( default_access != after_first_access )
-- 
2.0.1


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-07 23:12 [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tamas K Lengyel
  2014-08-07 23:12 ` [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information Tamas K Lengyel
@ 2014-08-07 23:33 ` Tian, Kevin
  2014-08-08  7:00 ` Jan Beulich
  2 siblings, 0 replies; 13+ messages in thread
From: Tian, Kevin @ 2014-08-07 23:33 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: ian.campbell, stefano.stabellini, Nakajima, Jun, Dong, Eddie,
	ian.jackson, Aravind.Gopalakrishnan, suravee.suthikulpanit,
	andrew.cooper3, boris.ostrovsky

> From: Tamas K Lengyel [mailto:tamas.lengyel@zentific.com]
> Sent: Thursday, August 07, 2014 4:13 PM
> 
> On Intel EPT the exit qualification generated by a violation also includes a bit
> (EPT_GLA_FAULT) which describes the following information:
> Set if the access causing the EPT violation is to a guest-physical address that is
> the translation of a linear address. Clear if the access causing the EPT violation
> is to a paging-structure entry as part of a page walk or the update of an
> accessed or dirty bit.
> 
> For more information see Table 27-7 in the Intel SDM.
> 
> This patch extends the mem_event system to deliver this extra information,
> which could be useful for determining the cause of a violation.
> 
> v3: Style fixes.
> v2: Split gla_fault into fault_in_gpt and fault_gla to be more compatible with
> the AMD implementation.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

Thanks
Kevin


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-07 23:12 [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tamas K Lengyel
  2014-08-07 23:12 ` [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information Tamas K Lengyel
  2014-08-07 23:33 ` [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tian, Kevin
@ 2014-08-08  7:00 ` Jan Beulich
  2014-08-08  8:04   ` Andrew Cooper
  2014-08-08  8:48   ` Tamas Lengyel
  2 siblings, 2 replies; 13+ messages in thread
From: Jan Beulich @ 2014-08-08  7:00 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: kevin.tian, ian.campbell, stefano.stabellini, jun.nakajima,
	andrew.cooper3, ian.jackson, eddie.dong, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, boris.ostrovsky

[-- Attachment #1: Type: text/plain, Size: 1667 bytes --]

>>> On 08.08.14 at 01:12, <tamas.lengyel@zentific.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned long cr2)
>  int hvm_hap_nested_page_fault(paddr_t gpa,
>                                bool_t gla_valid,
>                                unsigned long gla,
> +                              bool_t fault_in_gpt,
> +                              bool_t fault_gla,
>                                bool_t access_r,
>                                bool_t access_w,
>                                bool_t access_x)

Afaic it is out of the question to have a function with _six_ boolean
parameters. This needs to be consolidated into a single flags field. I
have actually done that already, in a patch serving a different purpose
(see attached), as discussed recently on this list. I would very much
appreciate it if you either rebased yours on top of that or modified
it along those lines.

> @@ -2371,11 +2372,19 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
>      }
>  
>      if ( qualification & EPT_GLA_VALID )
> +    {
>          __vmread(GUEST_LINEAR_ADDRESS, &gla);
> +        fault_gla = !!(qualification & EPT_GLA_FAULT);
> +        fault_in_gpt = !fault_gla;

I actually don't agree with Andrew regarding the need for two flags
here, if we already know that SVM also properly expresses the
distinction between faults on page-table accesses and faults on the
actual translation. The attached patch is also coded this way, and I
agree with your earlier argument for just a single flag.

Jan


[-- Attachment #2: EPT-fault-GLA.patch --]
[-- Type: text/plain, Size: 17674 bytes --]

TODO: remove //temp-s

There are a few intentional but not necessarily obvious (and possibly
subtle) adjustments to behavior:
- VMX now marks all write access faults also as read accesses (whether
  the hardware would do so for read-modify-write instructions depends
  on CPU model and the particular instruction)
- SVM no longer marks instruction fetch faults as read accesses
- __hvmemul_read() no longer blindly bails on instruction fetches
  matching the MMIO GVA (the callers of handle_mmio_with_translation()
  now control the behavior via the NPFEC_* flags they pass, and it
  didn't seem right to bail here rather than just falling through to
  the unaccelerated path)

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -462,6 +462,7 @@ static int __hvmemul_read(
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     paddr_t gpa;
     int rc;
+static unsigned long tot, cnt, thr, xlat;//temp
 
     rc = hvmemul_virtual_to_linear(
         seg, offset, bytes, &reps, access_type, hvmemul_ctxt, &addr);
@@ -481,11 +482,16 @@ static int __hvmemul_read(
         while ( off & (chunk - 1) )
             chunk >>= 1;
 
-    if ( unlikely(vio->mmio_gva == (addr & PAGE_MASK)) && vio->mmio_gva )
+++tot;//temp
+    if ( (vio->mmio_access & (access_type != hvm_access_insn_fetch ?
+                              NPFEC_read_access : NPFEC_insn_fetch)) &&
+         (vio->mmio_gva == (addr & PAGE_MASK)) )
     {
-        if ( access_type == hvm_access_insn_fetch )
-            return X86EMUL_UNHANDLEABLE;
         gpa = (((paddr_t)vio->mmio_gpfn << PAGE_SHIFT) | off);
+if(++cnt > thr) {//temp
+ thr |= cnt;
+ printk("read %lx/%lx/%lx [%pv:%lx:%x]\n", cnt, tot, xlat, curr, gpa, bytes);
+}
         while ( (off + chunk) <= PAGE_SIZE )
         {
             rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_READ, 0, p_data);
@@ -522,6 +528,7 @@ static int __hvmemul_read(
                                     hvmemul_ctxt);
         while ( rc == X86EMUL_OKAY )
         {
+++xlat;//temp
             rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_READ, 0, p_data);
             if ( rc != X86EMUL_OKAY || bytes == chunk )
                 break;
@@ -535,6 +542,7 @@ static int __hvmemul_read(
                 gpa += chunk;
             else
             {
+++xlat;//temp
                 rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
                                             hvmemul_ctxt);
                 off = 0;
@@ -611,6 +619,7 @@ static int hvmemul_write(
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     paddr_t gpa;
     int rc;
+static unsigned long tot, cnt, thr, xlat;//temp
 
     rc = hvmemul_virtual_to_linear(
         seg, offset, bytes, &reps, hvm_access_write, hvmemul_ctxt, &addr);
@@ -624,9 +633,15 @@ static int hvmemul_write(
         while ( off & (chunk - 1) )
             chunk >>= 1;
 
-    if ( unlikely(vio->mmio_gva == (addr & PAGE_MASK)) && vio->mmio_gva )
+++tot;//temp
+    if ( (vio->mmio_access & NPFEC_write_access) &&
+         (vio->mmio_gva == (addr & PAGE_MASK)) )
     {
         gpa = (((paddr_t)vio->mmio_gpfn << PAGE_SHIFT) | off);
+if(++cnt > thr) {//temp
+ thr |= cnt;
+ printk("write %lx/%lx/%lx [%pv:%lx:%x]\n", cnt, tot, xlat, curr, gpa, bytes);
+}
         while ( (off + chunk) <= PAGE_SIZE )
         {
             rc = hvmemul_do_mmio(gpa, &reps, chunk, 0, IOREQ_WRITE, 0, p_data);
@@ -655,6 +670,7 @@ static int hvmemul_write(
     case HVMCOPY_bad_gva_to_gfn:
         return X86EMUL_EXCEPTION;
     case HVMCOPY_bad_gfn_to_mfn:
+++xlat;//temp
         rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
                                     hvmemul_ctxt);
         while ( rc == X86EMUL_OKAY )
@@ -672,6 +688,7 @@ static int hvmemul_write(
                 gpa += chunk;
             else
             {
+++xlat;//temp
                 rc = hvmemul_linear_to_phys(addr, &gpa, chunk, &reps, pfec,
                                             hvmemul_ctxt);
                 off = 0;
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2684,11 +2684,8 @@ void hvm_inject_page_fault(int errcode, 
 }
 
 int hvm_hap_nested_page_fault(paddr_t gpa,
-                              bool_t gla_valid,
-                              unsigned long gla,
-                              bool_t access_r,
-                              bool_t access_w,
-                              bool_t access_x)
+                              unsigned int npfec,
+                              unsigned long gla)
 {
     unsigned long gfn = gpa >> PAGE_SHIFT;
     p2m_type_t p2mt;
@@ -2718,7 +2715,9 @@ int hvm_hap_nested_page_fault(paddr_t gp
          * the same as for shadow paging.
          */
         rv = nestedhvm_hap_nested_page_fault(v, &gpa,
-                                             access_r, access_w, access_x);
+                                             !!(npfec & NPFEC_read_access),
+                                             !!(npfec & NPFEC_write_access),
+                                             !!(npfec & NPFEC_insn_fetch));
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
         case NESTEDHVM_PAGEFAULT_RETRY:
@@ -2749,7 +2748,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
          && is_hvm_vcpu(v)
          && hvm_mmio_internal(gpa) )
     {
-        if ( !handle_mmio() )
+        if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         rc = 1;
         goto out;
@@ -2757,7 +2756,9 @@ int hvm_hap_nested_page_fault(paddr_t gp
 
     p2m = p2m_get_hostp2m(v->domain);
     mfn = get_gfn_type_access(p2m, gfn, &p2mt, &p2ma, 
-                              P2M_ALLOC | (access_w ? P2M_UNSHARE : 0), NULL);
+                              P2M_ALLOC | (npfec & NPFEC_write_access ?
+                                           P2M_UNSHARE : 0),
+                              NULL);
 
     /* Check access permissions first, then handle faults */
     if ( mfn_x(mfn) != INVALID_MFN )
@@ -2769,26 +2770,27 @@ int hvm_hap_nested_page_fault(paddr_t gp
         case p2m_access_n:
         case p2m_access_n2rwx:
         default:
-            violation = access_r || access_w || access_x;
+            violation = !!(npfec & (NPFEC_read_access | NPFEC_write_access |
+                                    NPFEC_insn_fetch));
             break;
         case p2m_access_r:
-            violation = access_w || access_x;
+            violation = !!(npfec & (NPFEC_write_access | NPFEC_insn_fetch));
             break;
         case p2m_access_w:
-            violation = access_r || access_x;
+            violation = !!(npfec & (NPFEC_read_access | NPFEC_insn_fetch));
             break;
         case p2m_access_x:
-            violation = access_r || access_w;
+            violation = !!(npfec & (NPFEC_read_access | NPFEC_write_access));
             break;
         case p2m_access_rx:
         case p2m_access_rx2rw:
-            violation = access_w;
+            violation = !!(npfec & NPFEC_write_access);
             break;
         case p2m_access_wx:
-            violation = access_r;
+            violation = !!(npfec & NPFEC_read_access);
             break;
         case p2m_access_rw:
-            violation = access_x;
+            violation = !!(npfec & NPFEC_insn_fetch);
             break;
         case p2m_access_rwx:
             break;
@@ -2796,8 +2798,11 @@ int hvm_hap_nested_page_fault(paddr_t gp
 
         if ( violation )
         {
-            if ( p2m_mem_access_check(gpa, gla_valid, gla, access_r, 
-                                        access_w, access_x, &req_ptr) )
+            if ( p2m_mem_access_check(gpa, !!(npfec & NPFEC_gla_valid), gla,
+                                      !!(npfec & NPFEC_read_access),
+                                      !!(npfec & NPFEC_write_access),
+                                      !!(npfec & NPFEC_insn_fetch),
+                                      &req_ptr) )
             {
                 fall_through = 1;
             } else {
@@ -2813,7 +2818,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
      * to the mmio handler.
      */
     if ( (p2mt == p2m_mmio_dm) || 
-         (access_w && (p2mt == p2m_ram_ro)) )
+         ((npfec & NPFEC_write_access) && (p2mt == p2m_ram_ro)) )
     {
         put_gfn(p2m->domain, gfn);
 
@@ -2821,7 +2826,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
         if ( unlikely(is_pvh_vcpu(v)) )
             goto out;
 
-        if ( !handle_mmio() )
+        if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         rc = 1;
         goto out;
@@ -2832,7 +2837,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
         paged = 1;
 
     /* Mem sharing: unshare the page and try again */
-    if ( access_w && (p2mt == p2m_ram_shared) )
+    if ( (npfec & NPFEC_write_access) && (p2mt == p2m_ram_shared) )
     {
         ASSERT(!p2m_is_nestedp2m(p2m));
         sharing_enomem = 
@@ -2849,7 +2854,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
          * a large page, we do not change other pages type within that large
          * page.
          */
-        if ( access_w )
+        if ( npfec & NPFEC_write_access )
         {
             paging_mark_dirty(v->domain, mfn_x(mfn));
             p2m_change_type_one(v->domain, gfn, p2m_ram_logdirty, p2m_ram_rw);
@@ -2859,7 +2864,7 @@ int hvm_hap_nested_page_fault(paddr_t gp
     }
 
     /* Shouldn't happen: Maybe the guest was writing to a r/o grant mapping? */
-    if ( access_w && (p2mt == p2m_grant_map_ro) )
+    if ( (npfec & NPFEC_write_access) && (p2mt == p2m_grant_map_ro) )
     {
         gdprintk(XENLOG_WARNING,
                  "trying to write to read-only grant mapping\n");
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -95,7 +95,7 @@ int handle_mmio(void)
     if ( vio->io_state == HVMIO_awaiting_completion )
         vio->io_state = HVMIO_handle_mmio_awaiting_completion;
     else
-        vio->mmio_gva = 0;
+        vio->mmio_access = 0;
 
     switch ( rc )
     {
@@ -124,9 +124,12 @@ int handle_mmio(void)
     return 1;
 }
 
-int handle_mmio_with_translation(unsigned long gva, unsigned long gpfn)
+int handle_mmio_with_translation(unsigned long gva, unsigned long gpfn,
+                                 unsigned int access)
 {
     struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+
+    vio->mmio_access = access & NPFEC_gla_translation ? access : 0;
     vio->mmio_gva = gva & PAGE_MASK;
     vio->mmio_gpfn = gpfn;
     return handle_mmio();
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1396,18 +1396,13 @@ const struct hvm_function_table * __init
 static void svm_do_nested_pgfault(struct vcpu *v,
     struct cpu_user_regs *regs, uint32_t npfec, paddr_t gpa)
 {
-    int ret;
+    unsigned int hapec;
     unsigned long gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
     p2m_access_t p2ma;
     struct p2m_domain *p2m = NULL;
 
-    ret = hvm_hap_nested_page_fault(gpa, 0, ~0ul, 
-                                    1, /* All NPFs count as reads */
-                                    npfec & PFEC_write_access, 
-                                    npfec & PFEC_insn_fetch);
-
     if ( tb_init_done )
     {
         struct {
@@ -1426,7 +1421,12 @@ static void svm_do_nested_pgfault(struct
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
-    switch (ret) {
+    hapec = npfec & PFEC_insn_fetch ? NPFEC_insn_fetch : NPFEC_read_access;
+    if ( npfec & PFEC_write_access )
+        hapec |= NPFEC_write_access;
+
+    switch ( hvm_hap_nested_page_fault(gpa, hapec, ~0ul) )
+    {
     case 0:
         break;
     case 1:
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2351,7 +2351,7 @@ static void ept_handle_violation(unsigne
     unsigned long gla, gfn = gpa >> PAGE_SHIFT;
     mfn_t mfn;
     p2m_type_t p2mt;
-    int ret;
+    unsigned int npfec = 0;
     struct domain *d = current->domain;
 
     if ( tb_init_done )
@@ -2370,16 +2370,23 @@ static void ept_handle_violation(unsigne
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
+    if ( qualification & EPT_WRITE_VIOLATION )
+        npfec |= NPFEC_read_access | NPFEC_write_access;
+    else if ( qualification & EPT_READ_VIOLATION )
+        npfec |= NPFEC_read_access;
+    if ( qualification & EPT_EXEC_VIOLATION )
+        npfec |= NPFEC_insn_fetch;
     if ( qualification & EPT_GLA_VALID )
+    {
         __vmread(GUEST_LINEAR_ADDRESS, &gla);
+        npfec |= NPFEC_gla_valid;
+        if ( qualification & EPT_GLA_FAULT )
+            npfec |= NPFEC_gla_translation;
+    }
     else
         gla = ~0ull;
-    ret = hvm_hap_nested_page_fault(gpa,
-                                    !!(qualification & EPT_GLA_VALID), gla,
-                                    !!(qualification & EPT_READ_VIOLATION),
-                                    !!(qualification & EPT_WRITE_VIOLATION),
-                                    !!(qualification & EPT_EXEC_VIOLATION));
-    switch ( ret )
+
+    switch ( hvm_hap_nested_page_fault(gpa, npfec, gla) )
     {
     case 0:         // Unhandled L1 EPT violation
         break;
@@ -2412,7 +2419,8 @@ static void ept_handle_violation(unsigne
     ept_walk_table(d, gfn);
 
     if ( qualification & EPT_GLA_VALID )
-        gdprintk(XENLOG_ERR, " --- GLA %#lx\n", gla);
+        gdprintk(XENLOG_ERR, " --- GLA %#lx (%s)\n", gla,
+                 qualification & EPT_GLA_FAULT ? "translation" : "walk");
 
     domain_crash(d);
 }
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2824,6 +2824,8 @@ static int sh_page_fault(struct vcpu *v,
     p2m_type_t p2mt;
     uint32_t rc;
     int version;
+    unsigned int npfec = NPFEC_read_access | NPFEC_gla_valid |
+                         NPFEC_gla_translation;
 #if SHADOW_OPTIMIZATIONS & SHOPT_FAST_EMULATION
     int fast_emul = 0;
 #endif
@@ -2834,6 +2836,9 @@ static int sh_page_fault(struct vcpu *v,
 
     perfc_incr(shadow_fault);
 
+    if ( regs->error_code & PFEC_write_access )
+        npfec |= NPFEC_write_access;
+
 #if SHADOW_OPTIMIZATIONS & SHOPT_FAST_EMULATION
     /* If faulting frame is successfully emulated in last shadow fault
      * it's highly likely to reach same emulation action for this frame.
@@ -2935,7 +2940,7 @@ static int sh_page_fault(struct vcpu *v,
             SHADOW_PRINTK("fast path mmio %#"PRIpaddr"\n", gpa);
             reset_early_unshadow(v);
             trace_shadow_gen(TRC_SHADOW_FAST_MMIO, va);
-            return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT)
+            return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, npfec)
                     ? EXCRET_fault_fixed : 0);
         }
         else
@@ -3424,7 +3429,7 @@ static int sh_page_fault(struct vcpu *v,
     paging_unlock(d);
     put_gfn(d, gfn_x(gfn));
     trace_shadow_gen(TRC_SHADOW_MMIO, va);
-    return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT)
+    return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, npfec)
             ? EXCRET_fault_fixed : 0);
 
  not_a_shadow_fault:
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -455,11 +455,14 @@ static inline void hvm_invalidate_regs_f
 #endif
 }
 
-int hvm_hap_nested_page_fault(paddr_t gpa,
-                              bool_t gla_valid, unsigned long gla,
-                              bool_t access_r,
-                              bool_t access_w,
-                              bool_t access_x);
+int hvm_hap_nested_page_fault(paddr_t gpa, unsigned int npfec,
+                              unsigned long gla);
+
+#define NPFEC_read_access     (1U << 0)
+#define NPFEC_write_access    (1U << 1)
+#define NPFEC_insn_fetch      (1U << 2)
+#define NPFEC_gla_valid       (1U << 3)
+#define NPFEC_gla_translation (1U << 4)
 
 #define hvm_msr_tsc_aux(v) ({                                               \
     struct domain *__d = (v)->domain;                                       \
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -119,7 +119,8 @@ static inline void register_buffered_io_
 void send_timeoffset_req(unsigned long timeoff);
 void send_invalidate_req(void);
 int handle_mmio(void);
-int handle_mmio_with_translation(unsigned long gva, unsigned long gpfn);
+int handle_mmio_with_translation(unsigned long gva, unsigned long gpfn,
+                                 unsigned int access);
 int handle_pio(uint16_t port, unsigned int size, int dir);
 void hvm_interrupt_post(struct vcpu *v, int vector, int type);
 void hvm_io_assist(ioreq_t *p);
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -54,8 +54,9 @@ struct hvm_vcpu_io {
      * HVM emulation:
      *  Virtual address @mmio_gva maps to MMIO physical frame @mmio_gpfn.
      *  The latter is known to be an MMIO frame (not RAM).
-     *  This translation is only valid if @mmio_gva is non-zero.
+     *  This translation is only valid for accesses as per @mmio_access.
      */
+    unsigned int        mmio_access; /* using NPFEC_* values */
     unsigned long       mmio_gva;
     unsigned long       mmio_gpfn;
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  7:00 ` Jan Beulich
@ 2014-08-08  8:04   ` Andrew Cooper
  2014-08-08  8:36     ` Tamas Lengyel
  2014-08-08  8:48   ` Tamas Lengyel
  1 sibling, 1 reply; 13+ messages in thread
From: Andrew Cooper @ 2014-08-08  8:04 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: kevin.tian, ian.campbell, stefano.stabellini, jun.nakajima,
	eddie.dong, ian.jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, boris.ostrovsky

On 08/08/2014 08:00, Jan Beulich wrote:
>>>> On 08.08.14 at 01:12, <tamas.lengyel@zentific.com> wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned long cr2)
>>  int hvm_hap_nested_page_fault(paddr_t gpa,
>>                                bool_t gla_valid,
>>                                unsigned long gla,
>> +                              bool_t fault_in_gpt,
>> +                              bool_t fault_gla,
>>                                bool_t access_r,
>>                                bool_t access_w,
>>                                bool_t access_x)
> Afaic it is out of question to have a function with _six_ boolean
> parameters. This needs to be consolidated into a single flags field. I
> have actually done that already, in a patch serving a different
> purpose (see attached), as discussed recently on this list. I would
> very much appreciate if you either re-based yours on top of that or
> modified it along those lines.
>
>> @@ -2371,11 +2372,19 @@ static void ept_handle_violation(unsigned long qualification, paddr_t gpa)
>>      }
>>  
>>      if ( qualification & EPT_GLA_VALID )
>> +    {
>>          __vmread(GUEST_LINEAR_ADDRESS, &gla);
>> +        fault_gla = !!(qualification & EPT_GLA_FAULT);
>> +        fault_in_gpt = !fault_gla;
> I am actually not agreeing with Andrew regarding the need for two
> flags here, if we already know that SVM also properly expresses the
> distinction between faults on page table accesses and faults on the
> actual translation. The attached patch is also coded in this way, and
> I agree with your earlier arguing for just a single flag.

I also agree.  My suggestion for two flags was on the (wrong) assumption
that AMD didn't currently support providing this information (although I
should have picked up on this and retracted my suggestion).

~Andrew


* Re: [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information
  2014-08-07 23:12 ` [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information Tamas K Lengyel
@ 2014-08-08  8:07   ` Andrew Cooper
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2014-08-08  8:07 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: kevin.tian, ian.campbell, stefano.stabellini, jun.nakajima,
	eddie.dong, ian.jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, boris.ostrovsky

On 08/08/2014 00:12, Tamas K Lengyel wrote:
> Extend the print-out of the memory violations to show gla valid/fault information.
>
> v2: Update to new fields and change printing 1/0 to y/n.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
>  tools/tests/xen-access/xen-access.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
> index 090df5f..378b205 100644
> --- a/tools/tests/xen-access/xen-access.c
> +++ b/tools/tests/xen-access/xen-access.c
> @@ -566,13 +566,16 @@ int main(int argc, char *argv[])
>                  }
>  
>                  printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
> -                       PRIx64") gla %016"PRIx64" (vcpu %d)\n",
> +                       PRIx64") gla %016"PRIx64" (valid: %c) fault in gpt: %c gpa has gla: %c (vcpu %d)\n",

As you likely need to respin this patch, please correct %d to %u as
req.vcpu_id is unsigned.

~Andrew

>                         req.access_r ? 'r' : '-',
>                         req.access_w ? 'w' : '-',
>                         req.access_x ? 'x' : '-',
>                         req.gfn,
>                         req.offset,
>                         req.gla,
> +                       req.gla_valid ? 'y' : 'n',
> +                       req.fault_in_gpt ? 'y' : 'n',
> +                       req.fault_gla ? 'y': 'n',
>                         req.vcpu_id);
>  
>                  if ( default_access != after_first_access )


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  8:04   ` Andrew Cooper
@ 2014-08-08  8:36     ` Tamas Lengyel
  2014-08-08  8:57       ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Tamas Lengyel @ 2014-08-08  8:36 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: kevin.tian, Jan Beulich, Stefano Stabellini, Jun Nakajima,
	eddie.dong, Ian Jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky, Ian Campbell


On Fri, Aug 8, 2014 at 10:04 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 08/08/2014 08:00, Jan Beulich wrote:
> >>>> On 08.08.14 at 01:12, <tamas.lengyel@zentific.com> wrote:
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned
> long cr2)
> >>  int hvm_hap_nested_page_fault(paddr_t gpa,
> >>                                bool_t gla_valid,
> >>                                unsigned long gla,
> >> +                              bool_t fault_in_gpt,
> >> +                              bool_t fault_gla,
> >>                                bool_t access_r,
> >>                                bool_t access_w,
> >>                                bool_t access_x)
> > Afaic it is out of question to have a function with _six_ boolean
> > parameters. This needs to be consolidated into a single flags field. I
> > have actually done that already, in a patch serving a different
> > purpose (see attached), as discussed recently on this list. I would
> > very much appreciate if you either re-based yours on top of that or
> > modified it along those lines.
> >
> >> @@ -2371,11 +2372,19 @@ static void ept_handle_violation(unsigned long
> qualification, paddr_t gpa)
> >>      }
> >>
> >>      if ( qualification & EPT_GLA_VALID )
> >> +    {
> >>          __vmread(GUEST_LINEAR_ADDRESS, &gla);
> >> +        fault_gla = !!(qualification & EPT_GLA_FAULT);
> >> +        fault_in_gpt = !fault_gla;
> > I am actually not agreeing with Andrew regarding the need for two
> > flags here, if we already know that SVM also properly expresses the
> > distinction between faults on page table accesses and faults on the
> > actual translation. The attached patch is also coded in this way, and
> > I agree with your earlier arguing for just a single flag.
>
> I also agree.  My suggestion for two flags was on the (wrong) assumption
> that AMD didn't currently support providing this information (although I
> should have picked up on this and retracted my suggestion).
>
> ~Andrew
>

I actually disagree here because there is still a slight difference between
the Intel and AMD implementation. In Intel's case gla_fault is only
available iff gla_valid is set. AMD doesn't have gla_valid. So in the
abstracted code, having gla_fault 0 could mean potentially different
things. That's certainly not good for abstraction. The AMD side could set
gla_valid = 1 if either of their two bits that we will aggregate into
gla_fault is set, but without actually providing a gla... and that could
lead to its own set of problems. So if we want to keep the bit aggregated
in gla_fault, we would need a bool_t gla_fault_valid.... which is just
going back to square one of having two bool_t fields as this patch does
already.


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  7:00 ` Jan Beulich
  2014-08-08  8:04   ` Andrew Cooper
@ 2014-08-08  8:48   ` Tamas Lengyel
  2014-08-08 10:28     ` Tamas Lengyel
  1 sibling, 1 reply; 13+ messages in thread
From: Tamas Lengyel @ 2014-08-08  8:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, Ian Campbell, Stefano Stabellini, Jun Nakajima,
	Andrew Cooper, Ian Jackson, eddie.dong, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky


On Fri, Aug 8, 2014 at 9:00 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 08.08.14 at 01:12, <tamas.lengyel@zentific.com> wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned
> long cr2)
> >  int hvm_hap_nested_page_fault(paddr_t gpa,
> >                                bool_t gla_valid,
> >                                unsigned long gla,
> > +                              bool_t fault_in_gpt,
> > +                              bool_t fault_gla,
> >                                bool_t access_r,
> >                                bool_t access_w,
> >                                bool_t access_x)
>
> Afaic it is out of question to have a function with _six_ boolean
> parameters. This needs to be consolidated into a single flags field. I
> have actually done that already, in a patch serving a different
> purpose (see attached), as discussed recently on this list. I would
> very much appreciate if you either re-based yours on top of that or
> modified it along those lines.
>

Certainly. I'll split your consolidation part into its own patch and rebase
on that.


>
> > @@ -2371,11 +2372,19 @@ static void ept_handle_violation(unsigned long
> qualification, paddr_t gpa)
> >      }
> >
> >      if ( qualification & EPT_GLA_VALID )
> > +    {
> >          __vmread(GUEST_LINEAR_ADDRESS, &gla);
> > +        fault_gla = !!(qualification & EPT_GLA_FAULT);
> > +        fault_in_gpt = !fault_gla;
>
> I am actually not agreeing with Andrew regarding the need for two
> flags here, if we already know that SVM also properly expresses the
> distinction between faults on page table accesses and faults on the
> actual translation. The attached patch is also coded in this way, and
> I agree with your earlier arguing for just a single flag.
>
> Jan
>
>


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  8:36     ` Tamas Lengyel
@ 2014-08-08  8:57       ` Jan Beulich
  2014-08-08  9:02         ` Andrew Cooper
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2014-08-08  8:57 UTC (permalink / raw)
  To: Tamas Lengyel
  Cc: kevin.tian, Ian Campbell, Stefano Stabellini, Jun Nakajima,
	Andrew Cooper, Ian Jackson, eddie.dong, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky

>>> On 08.08.14 at 10:36, <tamas.lengyel@zentific.com> wrote:
> I actually disagree here because there is still a slight difference between
> the Intel and AMD implementation. In Intel's case gla_fault is only
> available iff gla_valid is set. AMD doesn't have gla_valid. So in the
> abstracted code, having gla_fault 0 could mean potentially different
> things. That's certainly not good for abstraction. The AMD side could set
>> gla_valid = 1 if either of their two bits that we will aggregate into
> gla_fault is set, but without actually providing a gla.. and that could
> lead to its own set of problems. So if we want to keep the bit aggregated
> in gla_fault, we would need a bool_t gla_fault_valid.... which is just
> going back to square one of having two bool_t fields as this patch does
> already.

Hmm, the question is of what use the fault kind is when you don't
get passed the GLA. I.e. part of the problem may be a naming one:
"gla_fault" clearly implies a connection to there being a GLA.

Jan


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  8:57       ` Jan Beulich
@ 2014-08-08  9:02         ` Andrew Cooper
  2014-08-08  9:17           ` Tamas Lengyel
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Cooper @ 2014-08-08  9:02 UTC (permalink / raw)
  To: Jan Beulich, Tamas Lengyel
  Cc: kevin.tian, Ian Campbell, Stefano Stabellini, Jun Nakajima,
	eddie.dong, Ian Jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky

On 08/08/14 09:57, Jan Beulich wrote:
>>>> On 08.08.14 at 10:36, <tamas.lengyel@zentific.com> wrote:
>> I actually disagree here because there is still a slight difference between
>> the Intel and AMD implementation. In Intel's case gla_fault is only
>> available iff gla_valid is set. AMD doesn't have gla_valid. So in the
>> abstracted code, having gla_fault 0 could mean potentially different
>> things. That's certainly not good for abstraction. The AMD side could set
> >> gla_valid = 1 if either of their two bits that we will aggregate into
>> gla_fault is set, but without actually providing a gla.. and that could
>> lead to its own set of problems. So if we want to keep the bit aggregated
>> in gla_fault, we would need a bool_t gla_fault_valid.... which is just
>> going back to square one of having two bool_t fields as this patch does
>> already.
> Hmm, the question is of what use the fault kind is when you don't
> get passed the GLA. I.e. part of the problem may be a naming one:
> "gla_fault" clearly implies a connection to there being a GLA.
>
> Jan
>

I made the assumption that "gla_fault" was more "linear fault
information valid", and it does appear to be used that way in the
current code.  Renaming it might be hard, as it is now in the public
API, but a clarifying comment in public/mem_event.h might not go amiss.

~Andrew


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  9:02         ` Andrew Cooper
@ 2014-08-08  9:17           ` Tamas Lengyel
  0 siblings, 0 replies; 13+ messages in thread
From: Tamas Lengyel @ 2014-08-08  9:17 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: kevin.tian, Jan Beulich, Stefano Stabellini, Jun Nakajima,
	eddie.dong, Ian Jackson, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky, Ian Campbell


On Fri, Aug 8, 2014 at 11:02 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 08/08/14 09:57, Jan Beulich wrote:
> >>>> On 08.08.14 at 10:36, <tamas.lengyel@zentific.com> wrote:
> >> I actually disagree here because there is still a slight difference
> between
> >> the Intel and AMD implementation. In Intel's case gla_fault is only
> >> available iff gla_valid is set. AMD doesn't have gla_valid. So in the
> >> abstracted code, having gla_fault 0 could mean potentially different
> >> things. That's certainly not good for abstraction. The AMD side could
> set
> >> gla_valid = 1 if either of their two bits that we will aggregate into
> >> gla_fault is set, but without actually providing a gla.. and that could
> >> lead to its own set of problems. So if we want to keep the bit
> aggregated
> >> in gla_fault, we would need a bool_t gla_fault_valid.... which is just
> >> going back to square one of having two bool_t fields as this patch does
> >> already.
> > Hmm, the question is of what use the fault kind is when you don't
> > get passed the GLA. I.e. part of the problem may be a naming one:
> > "gla_fault" clearly implies a connection to there being a GLA.
> >
> > Jan
> >
>

Being given gla_fault without the gla is still useful. Based on that you
can distinguish three types of events: a violation during the page-table
lookup; a violation after the lookup, on the resulting address; and a
violation by direct PA access. Not having the actual GLA for the first two
is weird, but it doesn't render the information useless.

An interesting aspect of gla_fault that is still unclear to me is whether
that bit is flipped for TLB hits as well (which would make sense). If so,
it could also be used to profile the TLB utilization of the guest.


>
> I made the assumption that "gla_fault" was more "linear fault
> information valid", and it does appear to be used that way in the
> current code.  Renaming it might be hard, as it is now in the public
> API, but a clarifying comment in public/mem_event.h might not go amiss.
>
> ~Andrew
>

The gla_fault wasn't used anywhere in the Xen codebase, the only thing that
existed was an internal typedef for its bit. If you mean gla_valid, yes,
renaming that might be problematic.

Tamas


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08  8:48   ` Tamas Lengyel
@ 2014-08-08 10:28     ` Tamas Lengyel
  2014-08-08 10:34       ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Tamas Lengyel @ 2014-08-08 10:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, Ian Campbell, Stefano Stabellini, Jun Nakajima,
	Andrew Cooper, Ian Jackson, eddie.dong, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky


On Fri, Aug 8, 2014 at 10:48 AM, Tamas Lengyel <tamas.lengyel@zentific.com>
wrote:

>
>
>
> On Fri, Aug 8, 2014 at 9:00 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 08.08.14 at 01:12, <tamas.lengyel@zentific.com> wrote:
>> > --- a/xen/arch/x86/hvm/hvm.c
>> > +++ b/xen/arch/x86/hvm/hvm.c
>> > @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned
>> long cr2)
>> >  int hvm_hap_nested_page_fault(paddr_t gpa,
>> >                                bool_t gla_valid,
>> >                                unsigned long gla,
>> > +                              bool_t fault_in_gpt,
>> > +                              bool_t fault_gla,
>> >                                bool_t access_r,
>> >                                bool_t access_w,
>> >                                bool_t access_x)
>>
>> Afaic it is out of question to have a function with _six_ boolean
>> parameters. This needs to be consolidated into a single flags field. I
>> have actually done that already, in a patch serving a different
>> purpose (see attached), as discussed recently on this list. I would
>> very much appreciate if you either re-based yours on top of that or
>> modified it along those lines.
>>
>
> Certainly. I'll split your consolidation part into its own patch and
> rebase on that.
>

Should we consolidate p2m_mem_access_check's inputs into a bitmap as well?


* Re: [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information
  2014-08-08 10:28     ` Tamas Lengyel
@ 2014-08-08 10:34       ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2014-08-08 10:34 UTC (permalink / raw)
  To: Tamas Lengyel
  Cc: kevin.tian, Ian Campbell, Stefano Stabellini, Jun Nakajima,
	Andrew Cooper, Ian Jackson, eddie.dong, Aravind.Gopalakrishnan,
	suravee.suthikulpanit, xen-devel, Boris Ostrovsky

>>> On 08.08.14 at 12:28, <tamas.lengyel@zentific.com> wrote:
> On Fri, Aug 8, 2014 at 10:48 AM, Tamas Lengyel <tamas.lengyel@zentific.com>
> wrote:
> 
>>
>>
>>
>> On Fri, Aug 8, 2014 at 9:00 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>>> >>> On 08.08.14 at 01:12, <tamas.lengyel@zentific.com> wrote:
>>> > --- a/xen/arch/x86/hvm/hvm.c
>>> > +++ b/xen/arch/x86/hvm/hvm.c
>>> > @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned
>>> long cr2)
>>> >  int hvm_hap_nested_page_fault(paddr_t gpa,
>>> >                                bool_t gla_valid,
>>> >                                unsigned long gla,
>>> > +                              bool_t fault_in_gpt,
>>> > +                              bool_t fault_gla,
>>> >                                bool_t access_r,
>>> >                                bool_t access_w,
>>> >                                bool_t access_x)
>>>
>>> Afaic it is out of question to have a function with _six_ boolean
>>> parameters. This needs to be consolidated into a single flags field. I
>>> have actually done that already, in a patch serving a different
>>> purpose (see attached), as discussed recently on this list. I would
>>> very much appreciate if you either re-based yours on top of that or
>>> modified it along those lines.
>>>
>>
>> Certainly. I'll split your consolidation part into its own patch and
>> rebase on that.
>>
> 
> Should we consolidate p2m_mem_access_check's inputs into a bitmap as well?

I'd say so, but that's really a question to the respective maintainers.

Jan


end of thread, other threads:[~2014-08-08 10:34 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-07 23:12 [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tamas K Lengyel
2014-08-07 23:12 ` [PATCH v3 2/2] tools/xen-access: Print gla valid/fault information Tamas K Lengyel
2014-08-08  8:07   ` Andrew Cooper
2014-08-07 23:33 ` [PATCH v3 1/2] x86/mem_event: Deliver gla fault EPT violation information Tian, Kevin
2014-08-08  7:00 ` Jan Beulich
2014-08-08  8:04   ` Andrew Cooper
2014-08-08  8:36     ` Tamas Lengyel
2014-08-08  8:57       ` Jan Beulich
2014-08-08  9:02         ` Andrew Cooper
2014-08-08  9:17           ` Tamas Lengyel
2014-08-08  8:48   ` Tamas Lengyel
2014-08-08 10:28     ` Tamas Lengyel
2014-08-08 10:34       ` Jan Beulich
