From: Tamas K Lengyel <tamas.lengyel@zentific.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com, steve@zentific.com,
	stefano.stabellini@eu.citrix.com, jun.nakajima@intel.com,
	tim@xen.org, ian.jackson@eu.citrix.com, eddie.dong@intel.com,
	andres@lagarcavilla.org, jbeulich@suse.com,
	Tamas K Lengyel <tamas.lengyel@zentific.com>,
	rshriram@cs.ubc.ca, keir@xen.org, dgdegra@tycho.nsa.gov,
	yanghy@cn.fujitsu.com, rcojocaru@bitdefender.com
Subject: [RFC PATCH V3 01/12] xen/mem_event: Cleanup of mem_event structures
Date: Thu, 29 Jan 2015 22:46:27 +0100
Message-ID: <1422567998-29995-2-git-send-email-tamas.lengyel@zentific.com>
In-Reply-To: <1422567998-29995-1-git-send-email-tamas.lengyel@zentific.com>

From: Razvan Cojocaru <rcojocaru@bitdefender.com>

The public mem_event structures used to communicate with helper applications via
shared rings serve several distinct purposes. However, the field names within
these structures have not reflected this fact, resulting in the same fields
being reused to mean different things in different scenarios.

This patch remedies the issue by clearly defining the structure members based on
the actual context within which the structure is used.
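
For example, a ring consumer that previously read the flat req.gfn and
req.access_* fields now dispatches on req.reason and reads the matching
union member. A minimal sketch of the consumer-side pattern (get_request()
is the ring-pull helper used by the tools below; handle_access() and
handle_paging() are hypothetical placeholders, for illustration only):

    mem_event_request_t req;

    get_request(&mem_event, &req);

    switch ( req.reason )
    {
    case MEM_EVENT_REASON_MEM_ACCESS:
        /* Access-violation data now lives in its own union member. */
        handle_access(req.data.mem_access.gfn,
                      req.data.mem_access.access_r,
                      req.data.mem_access.access_w,
                      req.data.mem_access.access_x);
        break;
    case MEM_EVENT_REASON_MEM_PAGING:
        /* Paging events carry their own gfn/p2mt pair. */
        handle_paging(req.data.mem_paging.gfn,
                      req.data.mem_paging.p2mt);
        break;
    }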

Signed-off-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
v3: Add padding to mem_event structures.
    Add version field to mem_event structures and checks for it.
---
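The version handshake works in both directions: a consumer rejects requests
whose version does not match its own, and stamps MEM_EVENT_INTERFACE_VERSION
on every response so the hypervisor-side resume paths can apply the same
check. A minimal sketch of the consumer side, mirroring the xen-access.c
hunk below:

    if ( req.version != MEM_EVENT_INTERFACE_VERSION )
    {
        ERROR("Error: mem_event interface version mismatch!\n");
        interrupted = -1;
        continue;
    }

    memset(&rsp, 0, sizeof(rsp));
    rsp.version = MEM_EVENT_INTERFACE_VERSION;
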
 tools/tests/xen-access/xen-access.c |  43 +++++----
 tools/xenpaging/xenpaging.c         |  34 +++----
 xen/arch/x86/hvm/hvm.c              | 183 ++++++++++++++++++++----------------
 xen/arch/x86/mm/mem_sharing.c       |  10 +-
 xen/arch/x86/mm/p2m.c               | 132 ++++++++++++++------------
 xen/common/mem_access.c             |   3 +
 xen/include/public/mem_event.h      | 106 ++++++++++++++++-----
 7 files changed, 308 insertions(+), 203 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 6cb382d..002dc3e 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -551,13 +551,21 @@ int main(int argc, char *argv[])
                 continue;
             }
 
+            if ( req.version != MEM_EVENT_INTERFACE_VERSION )
+            {
+                ERROR("Error: mem_event interface version mismatch!\n");
+                interrupted = -1;
+                continue;
+            }
+
             memset( &rsp, 0, sizeof (rsp) );
+            rsp.version = MEM_EVENT_INTERFACE_VERSION;
             rsp.vcpu_id = req.vcpu_id;
             rsp.flags = req.flags;
 
             switch (req.reason) {
-            case MEM_EVENT_REASON_VIOLATION:
-                rc = xc_get_mem_access(xch, domain_id, req.gfn, &access);
+            case MEM_EVENT_REASON_MEM_ACCESS:
+                rc = xc_get_mem_access(xch, domain_id, req.data.mem_access.gfn, &access);
                 if (rc < 0)
                 {
                     ERROR("Error %d getting mem_access event\n", rc);
@@ -567,21 +575,21 @@ int main(int argc, char *argv[])
 
                 printf("PAGE ACCESS: %c%c%c for GFN %"PRIx64" (offset %06"
                        PRIx64") gla %016"PRIx64" (valid: %c; fault in gpt: %c; fault with gla: %c) (vcpu %u)\n",
-                       req.access_r ? 'r' : '-',
-                       req.access_w ? 'w' : '-',
-                       req.access_x ? 'x' : '-',
-                       req.gfn,
-                       req.offset,
-                       req.gla,
-                       req.gla_valid ? 'y' : 'n',
-                       req.fault_in_gpt ? 'y' : 'n',
-                       req.fault_with_gla ? 'y': 'n',
+                       req.data.mem_access.access_r ? 'r' : '-',
+                       req.data.mem_access.access_w ? 'w' : '-',
+                       req.data.mem_access.access_x ? 'x' : '-',
+                       req.data.mem_access.gfn,
+                       req.data.mem_access.offset,
+                       req.data.mem_access.gla,
+                       req.data.mem_access.gla_valid ? 'y' : 'n',
+                       req.data.mem_access.fault_in_gpt ? 'y' : 'n',
+                       req.data.mem_access.fault_with_gla ? 'y': 'n',
                        req.vcpu_id);
 
                 if ( default_access != after_first_access )
                 {
                     rc = xc_set_mem_access(xch, domain_id, after_first_access,
-                                           req.gfn, 1);
+                                           req.data.mem_access.gfn, 1);
                     if (rc < 0)
                     {
                         ERROR("Error %d setting gfn to access_type %d\n", rc,
@@ -592,13 +600,12 @@ int main(int argc, char *argv[])
                 }
 
 
-                rsp.gfn = req.gfn;
-                rsp.p2mt = req.p2mt;
+                rsp.data.mem_access.gfn = req.data.mem_access.gfn;
                 break;
-            case MEM_EVENT_REASON_INT3:
-                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n", 
-                       req.gla, 
-                       req.gfn,
+            case MEM_EVENT_REASON_SOFTWARE_BREAKPOINT:
+                printf("INT3: rip=%016"PRIx64", gfn=%"PRIx64" (vcpu %d)\n",
+                       req.regs.x86.rip,
+                       req.data.software_breakpoint.gfn,
                        req.vcpu_id);
 
                 /* Reinject */
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 82c1ee4..e5c5c76 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -684,9 +684,9 @@ static int xenpaging_resume_page(struct xenpaging *paging, mem_event_response_t
          * This allows page-out of these gfns if the target grows again.
          */
         if (paging->num_paged_out > paging->policy_mru_size)
-            policy_notify_paged_in(rsp->gfn);
+            policy_notify_paged_in(rsp->data.mem_paging.gfn);
         else
-            policy_notify_paged_in_nomru(rsp->gfn);
+            policy_notify_paged_in_nomru(rsp->data.mem_paging.gfn);
 
        /* Record number of resumed pages */
        paging->num_paged_out--;
@@ -910,49 +910,49 @@ int main(int argc, char *argv[])
 
             get_request(&paging->mem_event, &req);
 
-            if ( req.gfn > paging->max_pages )
+            if ( req.data.mem_paging.gfn > paging->max_pages )
             {
-                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n", req.gfn, paging->max_pages);
+                ERROR("Requested gfn %"PRIx64" higher than max_pages %x\n", req.data.mem_paging.gfn, paging->max_pages);
                 goto out;
             }
 
             /* Check if the page has already been paged in */
-            if ( test_and_clear_bit(req.gfn, paging->bitmap) )
+            if ( test_and_clear_bit(req.data.mem_paging.gfn, paging->bitmap) )
             {
                 /* Find where in the paging file to read from */
-                slot = paging->gfn_to_slot[req.gfn];
+                slot = paging->gfn_to_slot[req.data.mem_paging.gfn];
 
                 /* Sanity check */
-                if ( paging->slot_to_gfn[slot] != req.gfn )
+                if ( paging->slot_to_gfn[slot] != req.data.mem_paging.gfn )
                 {
-                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n", req.gfn, slot, paging->slot_to_gfn[slot]);
+                    ERROR("Expected gfn %"PRIx64" in slot %d, but found gfn %lx\n", req.data.mem_paging.gfn, slot, paging->slot_to_gfn[slot]);
                     goto out;
                 }
 
                 if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
                 {
-                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n", req.gfn, slot);
+                    DPRINTF("drop_page ^ gfn %"PRIx64" pageslot %d\n", req.data.mem_paging.gfn, slot);
                     /* Notify policy of page being dropped */
-                    policy_notify_dropped(req.gfn);
+                    policy_notify_dropped(req.data.mem_paging.gfn);
                 }
                 else
                 {
                     /* Populate the page */
-                    if ( xenpaging_populate_page(paging, req.gfn, slot) < 0 )
+                    if ( xenpaging_populate_page(paging, req.data.mem_paging.gfn, slot) < 0 )
                     {
-                        ERROR("Error populating page %"PRIx64"", req.gfn);
+                        ERROR("Error populating page %"PRIx64"", req.data.mem_paging.gfn);
                         goto out;
                     }
                 }
 
                 /* Prepare the response */
-                rsp.gfn = req.gfn;
+                rsp.data.mem_paging.gfn = req.data.mem_paging.gfn;
                 rsp.vcpu_id = req.vcpu_id;
                 rsp.flags = req.flags;
 
                 if ( xenpaging_resume_page(paging, &rsp, 1) < 0 )
                 {
-                    PERROR("Error resuming page %"PRIx64"", req.gfn);
+                    PERROR("Error resuming page %"PRIx64"", req.data.mem_paging.gfn);
                     goto out;
                 }
 
@@ -967,7 +967,7 @@ int main(int argc, char *argv[])
                 DPRINTF("page %s populated (domain = %d; vcpu = %d;"
                         " gfn = %"PRIx64"; paused = %d; evict_fail = %d)\n",
                         req.flags & MEM_EVENT_FLAG_EVICT_FAIL ? "not" : "already",
-                        paging->mem_event.domain_id, req.vcpu_id, req.gfn,
+                        paging->mem_event.domain_id, req.vcpu_id, req.data.mem_paging.gfn,
                         !!(req.flags & MEM_EVENT_FLAG_VCPU_PAUSED) ,
                         !!(req.flags & MEM_EVENT_FLAG_EVICT_FAIL) );
 
@@ -975,13 +975,13 @@ int main(int argc, char *argv[])
                 if (( req.flags & MEM_EVENT_FLAG_VCPU_PAUSED ) || ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL ))
                 {
                     /* Prepare the response */
-                    rsp.gfn = req.gfn;
+                    rsp.data.mem_paging.gfn = req.data.mem_paging.gfn;
                     rsp.vcpu_id = req.vcpu_id;
                     rsp.flags = req.flags;
 
                     if ( xenpaging_resume_page(paging, &rsp, 0) < 0 )
                     {
-                        PERROR("Error resuming page %"PRIx64"", req.gfn);
+                        PERROR("Error resuming page %"PRIx64"", req.data.mem_paging.gfn);
                         goto out;
                     }
                 }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c7984d1..e0c6f22 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -6266,48 +6266,42 @@ static void hvm_mem_event_fill_regs(mem_event_request_t *req)
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     const struct vcpu *curr = current;
 
-    req->x86_regs.rax = regs->eax;
-    req->x86_regs.rcx = regs->ecx;
-    req->x86_regs.rdx = regs->edx;
-    req->x86_regs.rbx = regs->ebx;
-    req->x86_regs.rsp = regs->esp;
-    req->x86_regs.rbp = regs->ebp;
-    req->x86_regs.rsi = regs->esi;
-    req->x86_regs.rdi = regs->edi;
-
-    req->x86_regs.r8  = regs->r8;
-    req->x86_regs.r9  = regs->r9;
-    req->x86_regs.r10 = regs->r10;
-    req->x86_regs.r11 = regs->r11;
-    req->x86_regs.r12 = regs->r12;
-    req->x86_regs.r13 = regs->r13;
-    req->x86_regs.r14 = regs->r14;
-    req->x86_regs.r15 = regs->r15;
-
-    req->x86_regs.rflags = regs->eflags;
-    req->x86_regs.rip    = regs->eip;
-
-    req->x86_regs.msr_efer = curr->arch.hvm_vcpu.guest_efer;
-    req->x86_regs.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
-    req->x86_regs.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
-    req->x86_regs.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
-}
-
-static int hvm_memory_event_traps(long p, uint32_t reason,
-                                  unsigned long value, unsigned long old, 
-                                  bool_t gla_valid, unsigned long gla) 
-{
-    struct vcpu* v = current;
-    struct domain *d = v->domain;
-    mem_event_request_t req = { .reason = reason };
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.msr_efer = curr->arch.hvm_vcpu.guest_efer;
+    req->regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];
+    req->regs.x86.cr3 = curr->arch.hvm_vcpu.guest_cr[3];
+    req->regs.x86.cr4 = curr->arch.hvm_vcpu.guest_cr[4];
+}
+
+static int hvm_memory_event_traps(long parameters, mem_event_request_t *req)
+{
     int rc;
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
 
-    if ( !(p & HVMPME_MODE_MASK) ) 
+    if ( !(parameters & HVMPME_MODE_MASK) )
         return 0;
 
-    if ( (p & HVMPME_onchangeonly) && (value == old) )
-        return 1;
-
     rc = mem_event_claim_slot(d, &d->mem_event->access);
     if ( rc == -ENOSYS )
     {
@@ -6318,85 +6312,112 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
     else if ( rc < 0 )
         return rc;
 
-    if ( (p & HVMPME_MODE_MASK) == HVMPME_mode_sync ) 
+    if ( (parameters & HVMPME_MODE_MASK) == HVMPME_mode_sync )
     {
-        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;    
+        req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
         mem_event_vcpu_pause(v);
     }
 
-    req.gfn = value;
-    req.vcpu_id = v->vcpu_id;
-    if ( gla_valid ) 
-    {
-        req.offset = gla & ((1 << PAGE_SHIFT) - 1);
-        req.gla = gla;
-        req.gla_valid = 1;
-    }
-    else
-    {
-        req.gla = old;
-    }
-    
-    hvm_mem_event_fill_regs(&req);
-    mem_event_put_request(d, &d->mem_event->access, &req);
-    
+    hvm_mem_event_fill_regs(req);
+    mem_event_put_request(d, &d->mem_event->access, req);
+
     return 1;
 }
 
 void hvm_memory_event_cr0(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR0],
-                           MEM_EVENT_REASON_CR0,
-                           value, old, 0, 0);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MOV_TO_CR0,
+        .vcpu_id = current->vcpu_id,
+        .data.mov_to_cr.new_value = value,
+        .data.mov_to_cr.old_value = old
+    };
+
+    long parameters = current->domain->arch.hvm_domain
+                        .params[HVM_PARAM_MEMORY_EVENT_CR0];
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_memory_event_traps(parameters, &req);
 }
 
 void hvm_memory_event_cr3(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR3],
-                           MEM_EVENT_REASON_CR3,
-                           value, old, 0, 0);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MOV_TO_CR3,
+        .vcpu_id = current->vcpu_id,
+        .data.mov_to_cr.new_value = value,
+        .data.mov_to_cr.old_value = old
+    };
+
+    long parameters = current->domain->arch.hvm_domain
+                        .params[HVM_PARAM_MEMORY_EVENT_CR3];
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_memory_event_traps(parameters, &req);
 }
 
 void hvm_memory_event_cr4(unsigned long value, unsigned long old) 
 {
-    hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_CR4],
-                           MEM_EVENT_REASON_CR4,
-                           value, old, 0, 0);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MOV_TO_CR4,
+        .vcpu_id = current->vcpu_id,
+        .data.mov_to_cr.new_value = value,
+        .data.mov_to_cr.old_value = old
+    };
+
+    long parameters = current->domain->arch.hvm_domain
+                        .params[HVM_PARAM_MEMORY_EVENT_CR4];
+
+    if ( (parameters & HVMPME_onchangeonly) && (value == old) )
+        return;
+
+    hvm_memory_event_traps(parameters, &req);
 }
 
 void hvm_memory_event_msr(unsigned long msr, unsigned long value)
 {
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MOV_TO_MSR,
+        .vcpu_id = current->vcpu_id,
+        .data.mov_to_msr.msr = msr,
+        .data.mov_to_msr.value = value,
+    };
+
     hvm_memory_event_traps(current->domain->arch.hvm_domain
-                             .params[HVM_PARAM_MEMORY_EVENT_MSR],
-                           MEM_EVENT_REASON_MSR,
-                           value, ~value, 1, msr);
+                            .params[HVM_PARAM_MEMORY_EVENT_MSR],
+                           &req);
 }
 
 int hvm_memory_event_int3(unsigned long gla) 
 {
     uint32_t pfec = PFEC_page_present;
-    unsigned long gfn;
-    gfn = paging_gva_to_gfn(current, gla, &pfec);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_SOFTWARE_BREAKPOINT,
+        .vcpu_id = current->vcpu_id,
+        .data.software_breakpoint.gfn = paging_gva_to_gfn(current, gla, &pfec)
+    };
 
     return hvm_memory_event_traps(current->domain->arch.hvm_domain
-                                    .params[HVM_PARAM_MEMORY_EVENT_INT3],
-                                  MEM_EVENT_REASON_INT3,
-                                  gfn, 0, 1, gla);
+                                   .params[HVM_PARAM_MEMORY_EVENT_INT3],
+                                  &req);
 }
 
 int hvm_memory_event_single_step(unsigned long gla)
 {
     uint32_t pfec = PFEC_page_present;
-    unsigned long gfn;
-    gfn = paging_gva_to_gfn(current, gla, &pfec);
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_SINGLESTEP,
+        .vcpu_id = current->vcpu_id,
+        .data.singlestep.gfn = paging_gva_to_gfn(current, gla, &pfec)
+    };
 
     return hvm_memory_event_traps(current->domain->arch.hvm_domain
-            .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
-            MEM_EVENT_REASON_SINGLESTEP,
-            gfn, 0, 1, gla);
+                                   .params[HVM_PARAM_MEMORY_EVENT_SINGLE_STEP],
+                                  &req);
 }
 
 int nhvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7c0fc7d..b5149f7 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -559,7 +559,10 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
 {
     struct vcpu *v = current;
     int rc;
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_SHARING,
+        .data.mem_sharing.gfn = gfn
+    };
 
     if ( (rc = __mem_event_claim_slot(d, 
                         &d->mem_event->share, allow_sleep)) < 0 )
@@ -571,7 +574,7 @@ int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
         mem_event_vcpu_pause(v);
     }
 
-    req.p2mt = p2m_ram_shared;
+    req.data.mem_sharing.p2mt = p2m_ram_shared;
     req.vcpu_id = v->vcpu_id;
 
     mem_event_put_request(d, &d->mem_event->share, &req);
@@ -598,6 +601,9 @@ int mem_sharing_sharing_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+            continue;
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index efa49dd..4b5eade 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1077,7 +1077,10 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
 void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
                                 p2m_type_t p2mt)
 {
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_PAGING,
+        .data.mem_paging.gfn = gfn
+    };
 
     /* We allow no ring in this unique case, because it won't affect
      * correctness of the guest execution at this point.  If this is the only
@@ -1124,7 +1127,10 @@ void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn,
 void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 {
     struct vcpu *v = current;
-    mem_event_request_t req = { .gfn = gfn };
+    mem_event_request_t req = {
+        .reason = MEM_EVENT_REASON_MEM_PAGING,
+        .data.mem_paging.gfn = gfn
+    };
     p2m_type_t p2mt;
     p2m_access_t a;
     mfn_t mfn;
@@ -1174,7 +1180,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
     }
 
     /* Send request to pager */
-    req.p2mt = p2mt;
+    req.data.mem_paging.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
     mem_event_put_request(d, &d->mem_event->paging, &req);
@@ -1296,6 +1302,9 @@ void p2m_mem_paging_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+            continue;
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
@@ -1309,15 +1318,15 @@ void p2m_mem_paging_resume(struct domain *d)
         if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
         {
-            gfn_lock(p2m, rsp.gfn, 0);
+            gfn_lock(p2m, rsp.data.mem_paging.gfn, 0);
-            mfn = p2m->get_entry(p2m, rsp.gfn, &p2mt, &a, 0, NULL);
+            mfn = p2m->get_entry(p2m, rsp.data.mem_paging.gfn, &p2mt, &a, 0, NULL);
             /* Allow only pages which were prepared properly, or pages which
              * were nominated but not evicted */
             if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
             {
-                p2m_set_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
+                p2m_set_entry(p2m, rsp.data.mem_paging.gfn, mfn, PAGE_ORDER_4K,
                               paging_mode_log_dirty(d) ? p2m_ram_logdirty :
                               p2m_ram_rw, a);
-                set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+                set_gpfn_from_mfn(mfn_x(mfn), rsp.data.mem_paging.gfn);
             }
-            gfn_unlock(p2m, rsp.gfn, 0);
+            gfn_unlock(p2m, rsp.data.mem_paging.gfn, 0);
         }
@@ -1337,49 +1346,49 @@ static void p2m_mem_event_fill_regs(mem_event_request_t *req)
     /* Architecture-specific vmcs/vmcb bits */
     hvm_funcs.save_cpu_ctxt(curr, &ctxt);
 
-    req->x86_regs.rax = regs->eax;
-    req->x86_regs.rcx = regs->ecx;
-    req->x86_regs.rdx = regs->edx;
-    req->x86_regs.rbx = regs->ebx;
-    req->x86_regs.rsp = regs->esp;
-    req->x86_regs.rbp = regs->ebp;
-    req->x86_regs.rsi = regs->esi;
-    req->x86_regs.rdi = regs->edi;
-
-    req->x86_regs.r8  = regs->r8;
-    req->x86_regs.r9  = regs->r9;
-    req->x86_regs.r10 = regs->r10;
-    req->x86_regs.r11 = regs->r11;
-    req->x86_regs.r12 = regs->r12;
-    req->x86_regs.r13 = regs->r13;
-    req->x86_regs.r14 = regs->r14;
-    req->x86_regs.r15 = regs->r15;
-
-    req->x86_regs.rflags = regs->eflags;
-    req->x86_regs.rip    = regs->eip;
-
-    req->x86_regs.dr7 = curr->arch.debugreg[7];
-    req->x86_regs.cr0 = ctxt.cr0;
-    req->x86_regs.cr2 = ctxt.cr2;
-    req->x86_regs.cr3 = ctxt.cr3;
-    req->x86_regs.cr4 = ctxt.cr4;
-
-    req->x86_regs.sysenter_cs = ctxt.sysenter_cs;
-    req->x86_regs.sysenter_esp = ctxt.sysenter_esp;
-    req->x86_regs.sysenter_eip = ctxt.sysenter_eip;
-
-    req->x86_regs.msr_efer = ctxt.msr_efer;
-    req->x86_regs.msr_star = ctxt.msr_star;
-    req->x86_regs.msr_lstar = ctxt.msr_lstar;
+    req->regs.x86.rax = regs->eax;
+    req->regs.x86.rcx = regs->ecx;
+    req->regs.x86.rdx = regs->edx;
+    req->regs.x86.rbx = regs->ebx;
+    req->regs.x86.rsp = regs->esp;
+    req->regs.x86.rbp = regs->ebp;
+    req->regs.x86.rsi = regs->esi;
+    req->regs.x86.rdi = regs->edi;
+
+    req->regs.x86.r8  = regs->r8;
+    req->regs.x86.r9  = regs->r9;
+    req->regs.x86.r10 = regs->r10;
+    req->regs.x86.r11 = regs->r11;
+    req->regs.x86.r12 = regs->r12;
+    req->regs.x86.r13 = regs->r13;
+    req->regs.x86.r14 = regs->r14;
+    req->regs.x86.r15 = regs->r15;
+
+    req->regs.x86.rflags = regs->eflags;
+    req->regs.x86.rip    = regs->eip;
+
+    req->regs.x86.dr7 = curr->arch.debugreg[7];
+    req->regs.x86.cr0 = ctxt.cr0;
+    req->regs.x86.cr2 = ctxt.cr2;
+    req->regs.x86.cr3 = ctxt.cr3;
+    req->regs.x86.cr4 = ctxt.cr4;
+
+    req->regs.x86.sysenter_cs = ctxt.sysenter_cs;
+    req->regs.x86.sysenter_esp = ctxt.sysenter_esp;
+    req->regs.x86.sysenter_eip = ctxt.sysenter_eip;
+
+    req->regs.x86.msr_efer = ctxt.msr_efer;
+    req->regs.x86.msr_star = ctxt.msr_star;
+    req->regs.x86.msr_lstar = ctxt.msr_lstar;
 
     hvm_get_segment_register(curr, x86_seg_fs, &seg);
-    req->x86_regs.fs_base = seg.base;
+    req->regs.x86.fs_base = seg.base;
 
     hvm_get_segment_register(curr, x86_seg_gs, &seg);
-    req->x86_regs.gs_base = seg.base;
+    req->regs.x86.gs_base = seg.base;
 
     hvm_get_segment_register(curr, x86_seg_cs, &seg);
-    req->x86_regs.cs_arbytes = seg.attr.bytes;
+    req->regs.x86.cs_arbytes = seg.attr.bytes;
 }
 
 void p2m_mem_event_emulate_check(struct vcpu *v, const mem_event_response_t *rsp)
@@ -1390,39 +1399,40 @@ void p2m_mem_event_emulate_check(struct vcpu *v, const mem_event_response_t *rsp
         xenmem_access_t access;
         bool_t violation = 1;
 
-        if ( p2m_get_mem_access(v->domain, rsp->gfn, &access) == 0 )
+        if ( p2m_get_mem_access(v->domain, rsp->data.mem_access.gfn, &access) == 0 )
         {
             switch ( access )
             {
             case XENMEM_access_n:
             case XENMEM_access_n2rwx:
             default:
-                violation = rsp->access_r || rsp->access_w || rsp->access_x;
+                violation = rsp->data.mem_access.access_r || rsp->data.mem_access.access_w ||
+                            rsp->data.mem_access.access_x;
                 break;
 
             case XENMEM_access_r:
-                violation = rsp->access_w || rsp->access_x;
+                violation = rsp->data.mem_access.access_w || rsp->data.mem_access.access_x;
                 break;
 
             case XENMEM_access_w:
-                violation = rsp->access_r || rsp->access_x;
+                violation = rsp->data.mem_access.access_r || rsp->data.mem_access.access_x;
                 break;
 
             case XENMEM_access_x:
-                violation = rsp->access_r || rsp->access_w;
+                violation = rsp->data.mem_access.access_r || rsp->data.mem_access.access_w;
                 break;
 
             case XENMEM_access_rx:
             case XENMEM_access_rx2rw:
-                violation = rsp->access_w;
+                violation = rsp->data.mem_access.access_w;
                 break;
 
             case XENMEM_access_wx:
-                violation = rsp->access_r;
+                violation = rsp->data.mem_access.access_r;
                 break;
 
             case XENMEM_access_rw:
-                violation = rsp->access_x;
+                violation = rsp->data.mem_access.access_x;
                 break;
 
             case XENMEM_access_rwx:
@@ -1540,24 +1550,24 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
     if ( req )
     {
         *req_ptr = req;
-        req->reason = MEM_EVENT_REASON_VIOLATION;
+        req->reason = MEM_EVENT_REASON_MEM_ACCESS;
 
         /* Pause the current VCPU */
         if ( p2ma != p2m_access_n2rwx )
             req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
 
         /* Send request to mem event */
-        req->gfn = gfn;
-        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
-        req->gla_valid = npfec.gla_valid;
-        req->gla = gla;
+        req->data.mem_access.gfn = gfn;
+        req->data.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+        req->data.mem_access.gla_valid = npfec.gla_valid;
+        req->data.mem_access.gla = gla;
         if ( npfec.kind == npfec_kind_with_gla )
-            req->fault_with_gla = 1;
+            req->data.mem_access.fault_with_gla = 1;
         else if ( npfec.kind == npfec_kind_in_gpt )
-            req->fault_in_gpt = 1;
-        req->access_r = npfec.read_access;
-        req->access_w = npfec.write_access;
-        req->access_x = npfec.insn_fetch;
+            req->data.mem_access.fault_in_gpt = 1;
+        req->data.mem_access.access_r = npfec.read_access;
+        req->data.mem_access.access_w = npfec.write_access;
+        req->data.mem_access.access_x = npfec.insn_fetch;
         req->vcpu_id = v->vcpu_id;
 
         p2m_mem_event_fill_regs(req);
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index d8aac5f..656dd8e 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -38,6 +38,9 @@ void mem_access_resume(struct domain *d)
     {
         struct vcpu *v;
 
+        if ( rsp.version != MEM_EVENT_INTERFACE_VERSION )
+            continue;
+
         if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
             continue;
 
diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
index 599f9e8..0f36b33 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -27,9 +27,13 @@
 #ifndef _XEN_PUBLIC_MEM_EVENT_H
 #define _XEN_PUBLIC_MEM_EVENT_H
 
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
 #include "xen.h"
 #include "io/ring.h"
 
+#define MEM_EVENT_INTERFACE_VERSION 0x00000001
+
 /* Memory event flags */
 #define MEM_EVENT_FLAG_VCPU_PAUSED     (1 << 0)
 #define MEM_EVENT_FLAG_DROP_PAGE       (1 << 1)
@@ -48,16 +52,27 @@
  */
 #define MEM_EVENT_FLAG_EMULATE_NOWRITE (1 << 6)
 
-/* Reasons for the memory event request */
-#define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
-#define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
-#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
-#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
-#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
-#define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
-#define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
-#define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
-                                             does NOT honour HVMPME_onchangeonly */
+/* Reasons for the memory event request */
+/* Default case */
+#define MEM_EVENT_REASON_UNKNOWN                 0
+/* Memory access violation */
+#define MEM_EVENT_REASON_MEM_ACCESS              1
+/* Memory sharing event */
+#define MEM_EVENT_REASON_MEM_SHARING             2
+/* Memory paging event */
+#define MEM_EVENT_REASON_MEM_PAGING              3
+/* CR0 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR0              4
+/* CR3 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR3              5
+/* CR4 was updated */
+#define MEM_EVENT_REASON_MOV_TO_CR4              6
+/* Debug operation executed (int3) */
+#define MEM_EVENT_REASON_SOFTWARE_BREAKPOINT     7
+/* Single-step (MTF) */
+#define MEM_EVENT_REASON_SINGLESTEP              8
+/* An MSR was updated. Does NOT honour HVMPME_onchangeonly */
+#define MEM_EVENT_REASON_MOV_TO_MSR              9
 
 /* Using a custom struct (not hvm_hw_cpu) so as to not fill
  * the mem_event ring buffer too quickly. */
@@ -97,31 +112,74 @@ struct mem_event_regs_x86 {
     uint32_t _pad;
 };
 
-typedef struct mem_event_st {
-    uint32_t flags;
-    uint32_t vcpu_id;
-
+struct mem_event_mem_access_data {
     uint64_t gfn;
     uint64_t offset;
     uint64_t gla; /* if gla_valid */
+    uint8_t access_r;
+    uint8_t access_w;
+    uint8_t access_x;
+    uint8_t gla_valid;
+    uint8_t fault_with_gla;
+    uint8_t fault_in_gpt;
+    uint16_t _pad;
+};
+
+struct mem_event_mov_to_cr_data {
+    uint64_t new_value;
+    uint64_t old_value;
+};
+
+struct mem_event_software_breakpoint_data {
+    uint64_t gfn;
+};
+
+struct mem_event_singlestep_data {
+    uint64_t gfn;
+};
 
+struct mem_event_mov_to_msr_data {
+    uint64_t msr;
+    uint64_t value;
+};
+
+struct mem_event_paging_data {
+    uint64_t gfn;
+    uint32_t p2mt;
+    uint32_t _pad;
+};
+
+struct mem_event_sharing_data {
+    uint64_t gfn;
     uint32_t p2mt;
+    uint32_t _pad;
+};
+
+typedef struct mem_event_st {
+    uint32_t version; /* MEM_EVENT_INTERFACE_VERSION */
+    uint32_t flags;
+    uint32_t vcpu_id;
+    uint32_t reason; /* MEM_EVENT_REASON_* */
 
-    uint16_t access_r:1;
-    uint16_t access_w:1;
-    uint16_t access_x:1;
-    uint16_t gla_valid:1;
-    uint16_t fault_with_gla:1;
-    uint16_t fault_in_gpt:1;
-    uint16_t available:10;
+    union {
+        struct mem_event_paging_data                mem_paging;
+        struct mem_event_sharing_data               mem_sharing;
+        struct mem_event_mem_access_data            mem_access;
+        struct mem_event_mov_to_cr_data             mov_to_cr;
+        struct mem_event_mov_to_msr_data            mov_to_msr;
+        struct mem_event_software_breakpoint_data   software_breakpoint;
+        struct mem_event_singlestep_data            singlestep;
+    } data;
 
-    uint16_t reason;
-    struct mem_event_regs_x86 x86_regs;
+    union {
+        struct mem_event_regs_x86 x86;
+    } regs;
 } mem_event_request_t, mem_event_response_t;
 
 DEFINE_RING_TYPES(mem_event, mem_event_request_t, mem_event_response_t);
 
-#endif
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+#endif /* _XEN_PUBLIC_MEM_EVENT_H */
 
 /*
  * Local variables:
-- 
2.1.4
