From: "Jan Beulich" <JBeulich@suse.com>
To: aisaila@bitdefender.com
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Razvan Cojocaru <rcojocaru@bitdefender.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v4 2/2] x86/emulate: Send vm_event from emulate
Date: Wed, 22 May 2019 03:56:13 -0600	[thread overview]
Message-ID: <5CE51CBD0200007800231438@prv1-mh.provo.novell.com> (raw)
In-Reply-To: <20190520125454.14805-2-aisaila@bitdefender.com>

>>> On 20.05.19 at 14:55, <aisaila@bitdefender.com> wrote:
> This patch aims to have mem access vm events sent from the emulator.
> This is useful in the case of emulated instructions that cause
> page-walks on access protected pages.
> 
> We use hvmemul_map_linear_addr() ro intercept r/w access and
> hvmemul_insn_fetch() to intercept exec access.

I'm afraid I don't understand this sentence. Or wait - is this a
simple typo, and you mean "to" instead of "ro"?

> First we try to send a vm event and if the event is sent then emulation
> returns X86EMUL_ACCESS_EXCEPTION. If the event is not sent then the
> emulation goes on as expected.

Perhaps it's obvious to a vm-event person why successfully sending
an event is to result in X86EMUL_ACCESS_EXCEPTION, but it's not
to me, despite having looked at prior versions. Can this (odd at
first glance) behavior please be briefly explained here?

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -15,6 +15,7 @@
>  #include <xen/paging.h>
>  #include <xen/trace.h>
>  #include <xen/vm_event.h>
> +#include <xen/monitor.h>
>  #include <asm/event.h>
>  #include <asm/i387.h>
>  #include <asm/xstate.h>
> @@ -26,6 +27,7 @@
>  #include <asm/hvm/support.h>
>  #include <asm/hvm/svm/svm.h>
>  #include <asm/vm_event.h>
> +#include <asm/altp2m.h>

In both cases please try to insert at least halfway alphabetically
(I didn't check whether the directives are fully sorted already),
rather than blindly at the end.
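
Purely for illustration (and judging only from the directives visible
in the quoted hunks - the full lists may differ), insertion points
preserving rough alphabetical order might be:

    #include <xen/monitor.h>    /* before paging.h */
    #include <xen/paging.h>
    #include <xen/trace.h>
    #include <xen/vm_event.h>
    ...
    #include <asm/altp2m.h>     /* at the head of the asm/ block */
    #include <asm/event.h>
    #include <asm/i387.h>
    #include <asm/xstate.h>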

> @@ -619,6 +621,68 @@ static int hvmemul_linear_to_phys(
>      return X86EMUL_OKAY;
>  }
>  
> +static bool hvmemul_send_vm_event(unsigned long gla,
> +                                  uint32_t pfec, unsigned int bytes,
> +                                  struct hvm_emulate_ctxt ctxt)
> +{
> +    xenmem_access_t access;
> +    vm_event_request_t req = {};
> +    gfn_t gfn;
> +    paddr_t gpa;
> +    unsigned long reps = 1;
> +    int rc;
> +
> +    if ( !ctxt.send_event || !pfec )

Why the !pfec part of the condition?

> +        return false;
> +
> +    rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, &ctxt);

As said before, I don't think it's a good idea to do the page walk
twice: this and the pre-existing one can easily return different
results.

Additionally, as also said before (I think), the function may raise
#PF, which you don't seem to deal with despite discarding the
X86EMUL_EXCEPTION return value ...

> +    if ( rc != X86EMUL_OKAY )
> +        return false;

... here.
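
As a sketch of the distinction I mean (hypothetical - whether simply
dropping the queued event is even correct here is for you to judge;
x86_emul_reset_event() is the existing helper for discarding one):

    rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, &ctxt);
    if ( rc == X86EMUL_EXCEPTION )
    {
        /* The walk queued #PF; don't let it leak past this point. */
        x86_emul_reset_event(&ctxt.ctxt);
        return false;
    }
    if ( rc != X86EMUL_OKAY )
        return false;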

> +    gfn = gaddr_to_gfn(gpa);
> +
> +    if ( p2m_get_mem_access(current->domain, gfn, &access,
> +                            altp2m_vcpu_idx(current)) != 0 )
> +        return false;
> +
> +    switch ( access ) {
> +    case XENMEM_access_x:
> +    case XENMEM_access_rx:
> +        if ( pfec & PFEC_write_access )
> +            req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
> +        break;
> +
> +    case XENMEM_access_w:
> +    case XENMEM_access_rw:
> +        if ( pfec & PFEC_insn_fetch )
> +            req.u.mem_access.flags = MEM_ACCESS_X;
> +        break;
> +
> +    case XENMEM_access_r:
> +    case XENMEM_access_n:
> +        if ( pfec & PFEC_write_access )
> +            req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
> +        if ( pfec & PFEC_insn_fetch )
> +            req.u.mem_access.flags |= MEM_ACCESS_X;
> +        break;
> +
> +    default:
> +        return false;
> +    }

Aren't you looking at the leaf page here, rather than at any of the
involved page tables? Or am I misunderstanding the description
saying "page-walks on access protected pages"?

> @@ -636,6 +700,7 @@ static void *hvmemul_map_linear_addr(
>      unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
>          (linear >> PAGE_SHIFT) + 1;
>      unsigned int i;
> +    gfn_t gfn;
>  
>      /*
>       * mfn points to the next free slot.  All used slots have a page reference
> @@ -674,7 +739,7 @@ static void *hvmemul_map_linear_addr(
>          ASSERT(mfn_x(*mfn) == 0);
>  
>          res = hvm_translate_get_page(curr, addr, true, pfec,
> -                                     &pfinfo, &page, NULL, &p2mt);
> +                                     &pfinfo, &page, &gfn, &p2mt);
>  
>          switch ( res )
>          {

Are these two hunks leftovers? You don't use "gfn" anywhere.

> @@ -1248,7 +1318,21 @@ int hvmemul_insn_fetch(
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>      /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
>      uint8_t insn_off = offset - hvmemul_ctxt->insn_buf_eip;
> +    uint32_t pfec = PFEC_page_present | PFEC_insn_fetch;
> +    unsigned long addr, reps = 1;
> +    int rc = 0;
> +
> +    rc = hvmemul_virtual_to_linear(
> +        seg, offset, bytes, &reps, hvm_access_insn_fetch, hvmemul_ctxt, &addr);
> +
> +    if ( rc != X86EMUL_OKAY || !bytes )
> +        return rc;
> +
> +    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
> +        pfec |= PFEC_user_mode;
>  
> +    if ( hvmemul_send_vm_event(addr, pfec, bytes, *hvmemul_ctxt) )
> +        return X86EMUL_ACCESS_EXCEPTION;
>      /*
>       * Fall back if requested bytes are not in the prefetch cache.
>       * But always perform the (fake) read when bytes == 0.

Despite what was said before, you're still doing things a second time
here just to satisfy hvmemul_send_vm_event()'s needs, even
if that function ends up bailing right away.
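
To illustrate (a sketch only, eliding the bytes == 0 special case):
gating the duplicated work on the send_event flag would at least keep
the common path unchanged:

    if ( unlikely(hvmemul_ctxt->send_event) )
    {
        rc = hvmemul_virtual_to_linear(seg, offset, bytes, &reps,
                                       hvm_access_insn_fetch,
                                       hvmemul_ctxt, &addr);
        if ( rc != X86EMUL_OKAY )
            return rc;

        if ( hvmemul_send_vm_event(addr, pfec, bytes, *hvmemul_ctxt) )
            return X86EMUL_ACCESS_EXCEPTION;
    }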

Also, please don't lose the blank line that precedes the comment
you're adding code ahead of.

> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
> @@ -162,6 +162,8 @@ struct x86_emul_fpu_aux {
>  #define X86EMUL_UNRECOGNIZED   X86EMUL_UNIMPLEMENTED
>   /* (cmpxchg accessor): CMPXCHG failed. */
>  #define X86EMUL_CMPXCHG_FAILED 7
> +/* Emulator tried to access a protected page. */
> +#define X86EMUL_ACCESS_EXCEPTION 8

This still doesn't make clear what the difference is to
X86EMUL_EXCEPTION.
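
If the intended semantics are what I guess them to be, a comment along
these lines (my wording, an assumption) would settle the question:

    /*
     * Emulation was stopped because a vm_event subscriber claimed the
     * access; unlike X86EMUL_EXCEPTION, no architectural fault is left
     * pending for injection into the guest.
     */
    #define X86EMUL_ACCESS_EXCEPTION 8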

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
