From: Razvan Cojocaru <rcojocaru@bitdefender.com>
To: Tamas K Lengyel <tamas.lengyel@zentific.com>,
	xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/2] vm_event: Sanitize vm_event response handling
Date: Wed, 14 Sep 2016 10:49:48 +0300	[thread overview]
Message-ID: <8c8ecd0b-4ab9-6a4e-29f0-1ce62bce1839@bitdefender.com> (raw)
In-Reply-To: <20160913181223.1459-1-tamas.lengyel@zentific.com>

On 09/13/2016 09:12 PM, Tamas K Lengyel wrote:
> Setting response flags in vm_event is only ever safe if the vCPUs are paused.
> To reflect this we move all checks within the if block that already checks
> whether this is the case. For checks that are only supported on one
> architecture, we relocate the bitmask operations to the arch-specific
> handlers to avoid the overhead on architectures that don't support them.
> 
> Furthermore, we clean up the emulation checks so the code more clearly
> represents the decision logic for when emulation should take place. As part
> of this we also set the stage to allow emulation in response to other types
> of events, not just mem_access violations.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/x86/mm/p2m.c          | 81 +++++++++++++++++++-----------------------
>  xen/arch/x86/vm_event.c        | 35 +++++++++++++++++-
>  xen/common/vm_event.c          | 53 ++++++++++++++-------------
>  xen/include/asm-arm/p2m.h      |  4 +--
>  xen/include/asm-arm/vm_event.h |  9 ++++-
>  xen/include/asm-x86/p2m.h      |  4 +--
>  xen/include/asm-x86/vm_event.h |  5 ++-
>  xen/include/xen/mem_access.h   | 12 -------
>  8 files changed, 113 insertions(+), 90 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 7d14c3b..6c01868 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1588,62 +1588,55 @@ void p2m_mem_paging_resume(struct domain *d, vm_event_response_t *rsp)
>      }
>  }
>  
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp)
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,

Judging by the reviews for my pending patch, I believe you'll be asked
to switch to bool / true / false here.

> +                                    const vm_event_response_t *rsp)
>  {
> -    /* Mark vcpu for skipping one instruction upon rescheduling. */
> -    if ( rsp->flags & VM_EVENT_FLAG_EMULATE )
> -    {
> -        xenmem_access_t access;
> -        bool_t violation = 1;
> -        const struct vm_event_mem_access *data = &rsp->u.mem_access;
> +    xenmem_access_t access;
> +    bool_t violation = 1;
> +    const struct vm_event_mem_access *data = &rsp->u.mem_access;
>  
> -        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    {
> +        switch ( access )
>          {
> -            switch ( access )
> -            {
> -            case XENMEM_access_n:
> -            case XENMEM_access_n2rwx:
> -            default:
> -                violation = data->flags & MEM_ACCESS_RWX;
> -                break;
> +        case XENMEM_access_n:
> +        case XENMEM_access_n2rwx:
> +        default:
> +            violation = data->flags & MEM_ACCESS_RWX;
> +            break;
>  
> -            case XENMEM_access_r:
> -                violation = data->flags & MEM_ACCESS_WX;
> -                break;
> +        case XENMEM_access_r:
> +            violation = data->flags & MEM_ACCESS_WX;
> +            break;
>  
> -            case XENMEM_access_w:
> -                violation = data->flags & MEM_ACCESS_RX;
> -                break;
> +        case XENMEM_access_w:
> +            violation = data->flags & MEM_ACCESS_RX;
> +            break;
>  
> -            case XENMEM_access_x:
> -                violation = data->flags & MEM_ACCESS_RW;
> -                break;
> +        case XENMEM_access_x:
> +            violation = data->flags & MEM_ACCESS_RW;
> +            break;
>  
> -            case XENMEM_access_rx:
> -            case XENMEM_access_rx2rw:
> -                violation = data->flags & MEM_ACCESS_W;
> -                break;
> +        case XENMEM_access_rx:
> +        case XENMEM_access_rx2rw:
> +            violation = data->flags & MEM_ACCESS_W;
> +            break;
>  
> -            case XENMEM_access_wx:
> -                violation = data->flags & MEM_ACCESS_R;
> -                break;
> +        case XENMEM_access_wx:
> +            violation = data->flags & MEM_ACCESS_R;
> +            break;
>  
> -            case XENMEM_access_rw:
> -                violation = data->flags & MEM_ACCESS_X;
> -                break;
> +        case XENMEM_access_rw:
> +            violation = data->flags & MEM_ACCESS_X;
> +            break;
>  
> -            case XENMEM_access_rwx:
> -                violation = 0;
> -                break;
> -            }
> +        case XENMEM_access_rwx:
> +            violation = 0;
> +            break;
>          }
> -
> -        v->arch.vm_event->emulate_flags = violation ? rsp->flags : 0;
> -
> -        if ( (rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA) )
> -            v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
>      }
> +
> +    return violation;
>  }
>  
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
> diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
> index e938ca3..343b9c8 100644
> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -18,6 +18,7 @@
>   * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <asm/p2m.h>
>  #include <asm/vm_event.h>
>  
>  /* Implicitly serialized by the domctl lock. */
> @@ -56,8 +57,12 @@ void vm_event_cleanup_domain(struct domain *d)
>      d->arch.mem_access_emulate_each_rep = 0;
>  }
>  
> -void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
> +void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                vm_event_response_t *rsp)
>  {
> +    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
> +        return;
> +
>      if ( !is_hvm_domain(d) )
>          return;
>  
> @@ -186,6 +191,34 @@ void vm_event_fill_regs(vm_event_request_t *req)
>      req->data.regs.x86.cs_arbytes = seg.attr.bytes;
>  }
>  
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    if ( !(rsp->flags & VM_EVENT_FLAG_EMULATE) )
> +    {
> +        v->arch.vm_event->emulate_flags = 0;
> +        return;
> +    }
> +
> +    switch ( rsp->reason )
> +    {
> +    case VM_EVENT_REASON_MEM_ACCESS:
> +        /*
> +         * Emulate iff this is a response to a mem_access violation and there
> +         * are still conflicting mem_access permissions in-place.
> +         */
> +        if ( p2m_mem_access_emulate_check(v, rsp) )
> +        {
> +            if ( rsp->flags & VM_EVENT_FLAG_SET_EMUL_READ_DATA )
> +                v->arch.vm_event->emul_read_data = rsp->data.emul_read_data;
> +
> +            v->arch.vm_event->emulate_flags = rsp->flags;
> +        }
> +        break;
> +    default:
> +        break;
> +    };
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 8398af7..907ab40 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -398,42 +398,41 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>           * In some cases the response type needs extra handling, so here
>           * we call the appropriate handlers.
>           */
> -        switch ( rsp.reason )
> -        {
> -#ifdef CONFIG_X86
> -        case VM_EVENT_REASON_MOV_TO_MSR:
> -#endif
> -        case VM_EVENT_REASON_WRITE_CTRLREG:
> -            vm_event_register_write_resume(v, &rsp);
> -            break;
> -
> -#ifdef CONFIG_HAS_MEM_ACCESS
> -        case VM_EVENT_REASON_MEM_ACCESS:
> -            mem_access_resume(v, &rsp);
> -            break;
> -#endif
>  
> +        /* Check flags which apply only when the vCPU is paused */
> +        if ( atomic_read(&v->vm_event_pause_count) )
> +        {
>  #ifdef CONFIG_HAS_MEM_PAGING
> -        case VM_EVENT_REASON_MEM_PAGING:
> -            p2m_mem_paging_resume(d, &rsp);
> -            break;
> +            if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
> +                p2m_mem_paging_resume(d, &rsp);
>  #endif
>  
> -        };
> +            /*
> +             * Check emulation flags in the arch-specific handler only, as it
> +             * has to set arch-specific flags when supported, and to avoid
> +             * bitmask overhead when it isn't supported.
> +             */
> +            vm_event_emulate_check(v, &rsp);
> +
> +            /*
> +             * Check in arch-specific handler to avoid bitmask overhead when
> +             * not supported.
> +             */
> +            vm_event_register_write_resume(v, &rsp);
>  
> -        /* Check for altp2m switch */
> -        if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
> -            p2m_altp2m_check(v, rsp.altp2m_idx);
> +            /*
> +             * Check in arch-specific handler to avoid bitmask overhead when
> +             * not supported.
> +             */
> +            vm_event_toggle_singlestep(d, v, &rsp);
> +
> +            /* Check for altp2m switch */
> +            if ( rsp.flags & VM_EVENT_FLAG_ALTERNATE_P2M )
> +                p2m_altp2m_check(v, rsp.altp2m_idx);
>  
> -        /* Check flags which apply only when the vCPU is paused */
> -        if ( atomic_read(&v->vm_event_pause_count) )
> -        {
>              if ( rsp.flags & VM_EVENT_FLAG_SET_REGISTERS )
>                  vm_event_set_registers(v, &rsp);
>  
> -            if ( rsp.flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
> -                vm_event_toggle_singlestep(d, v);
> -
>              if ( rsp.flags & VM_EVENT_FLAG_VCPU_PAUSED )
>                  vm_event_vcpu_unpause(v);
>          }
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 53c4d78..5e9bc54 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -121,10 +121,10 @@ typedef enum {
>                               p2m_to_mask(p2m_map_foreign)))
>  
>  static inline
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
>                                    const vm_event_response_t *rsp)

This returns bool_t ...

>  {
> -    /* Not supported on ARM. */
> +    return false;
>  }

... but you return false. Please switch to bool.

>  
>  static inline
> diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
> index 9482636..66f2474 100644
> --- a/xen/include/asm-arm/vm_event.h
> +++ b/xen/include/asm-arm/vm_event.h
> @@ -34,7 +34,8 @@ static inline void vm_event_cleanup_domain(struct domain *d)
>      memset(&d->monitor, 0, sizeof(d->monitor));
>  }
>  
> -static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
> +static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
> +                                              vm_event_response_t *rsp)
>  {
>      /* Not supported on ARM. */
>  }
> @@ -45,4 +46,10 @@ void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
>      /* Not supported on ARM. */
>  }
>  
> +static inline
> +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    /* Not supported on ARM. */
> +}
> +
>  #endif /* __ASM_ARM_VM_EVENT_H__ */
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 9fc9ead..1897def 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -677,8 +677,8 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>  
>  /* Check for emulation and mark vcpu for skipping one instruction
>   * upon rescheduling if required. */
> -void p2m_mem_access_emulate_check(struct vcpu *v,
> -                                  const vm_event_response_t *rsp);
> +bool_t p2m_mem_access_emulate_check(struct vcpu *v,
> +                                    const vm_event_response_t *rsp);

Bool.


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

Thread overview: 21+ messages
2016-09-13 18:12 [PATCH 1/2] vm_event: Sanitize vm_event response handling Tamas K Lengyel
2016-09-13 18:12 ` [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation Tamas K Lengyel
2016-09-14  7:58   ` Razvan Cojocaru
2016-09-15 16:36     ` Tamas K Lengyel
2016-09-15 17:08       ` Razvan Cojocaru
2016-09-16  7:21         ` Razvan Cojocaru
2016-09-16 15:37           ` Tamas K Lengyel
2016-09-16 16:09             ` Razvan Cojocaru
2016-09-14 15:55   ` Jan Beulich
2016-09-14 16:20     ` Tamas K Lengyel
2016-09-15  5:58       ` Jan Beulich
2016-09-15 15:27         ` Tamas K Lengyel
2016-09-15 16:09           ` Jan Beulich
2016-09-14  7:49 ` Razvan Cojocaru [this message]
2016-09-14  9:33 ` [PATCH 1/2] vm_event: Sanitize vm_event response handling Julien Grall
2016-09-14 15:14   ` Tamas K Lengyel
2016-09-14 15:15     ` Julien Grall
2016-09-14 15:19       ` Tamas K Lengyel
2016-09-14 13:38 ` George Dunlap
2016-09-14 15:11   ` Tamas K Lengyel
2016-09-15 10:35     ` George Dunlap
