* [PATCH 0/3] x86: insn-fetch related emulation adjustments
@ 2021-12-03 11:18 Jan Beulich
  2021-12-03 11:21 ` [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments Jan Beulich
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Jan Beulich @ 2021-12-03 11:18 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

Two fixes and some tidying.

1: HVM: permit CLFLUSH{,OPT} on execute-only code segments
2: HVM: fail virt-to-linear conversion for insn fetches from non-code segments
3: emul: drop "seg" parameter from insn_fetch() hook

Jan




* [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments
  2021-12-03 11:18 [PATCH 0/3] x86: insn-fetch related emulation adjustments Jan Beulich
@ 2021-12-03 11:21 ` Jan Beulich
  2021-12-03 11:48   ` Andrew Cooper
  2021-12-10 12:53   ` Durrant, Paul
  2021-12-03 11:22 ` [PATCH 2/3] x86/HVM: fail virt-to-linear conversion for insn fetches from non-code segments Jan Beulich
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 11+ messages in thread
From: Jan Beulich @ 2021-12-03 11:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

The SDM explicitly permits this, and since that's sensible behavior
don't special case AMD (where the PM doesn't explicitly say so).

Fixes: 52dba7bd0b36 ("x86emul: generalize wbinvd() hook")
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2310,7 +2310,9 @@ static int hvmemul_cache_op(
         ASSERT(!is_x86_system_segment(seg));
 
         rc = hvmemul_virtual_to_linear(seg, offset, 0, NULL,
-                                       hvm_access_read, hvmemul_ctxt, &addr);
+                                       op != x86emul_clwb ? hvm_access_none
+                                                          : hvm_access_read,
+                                       hvmemul_ctxt, &addr);
         if ( rc != X86EMUL_OKAY )
             break;
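[Editorial note: the effect of the hunk can be modeled by the standalone
sketch below. The enumerator names are hypothetical stand-ins for Xen's
x86emul_* and hvm_access_* constants; only the op test itself mirrors the
diff. CLFLUSH{,OPT} are architecturally permitted on execute-only code
segments, so their address translation needs no read-permission check,
while CLWB keeps the semantics of an ordinary 1-byte read.]

```c
#include <assert.h>

/*
 * Standalone model of the access-type selection in hvmemul_cache_op()
 * after this change.  Names are illustrative, not Xen's actual ones.
 */
enum cache_op { op_clflush, op_clflushopt, op_clwb };
enum hvm_access { hvm_access_none, hvm_access_read };

static enum hvm_access cache_op_access_type(enum cache_op op)
{
    /*
     * CLFLUSH{,OPT}: only segment limit / canonicality checks are
     * wanted (hvm_access_none), so execute-only segments pass.
     * CLWB: retains the permission checks of a 1-byte read.
     */
    return op != op_clwb ? hvm_access_none : hvm_access_read;
}
```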
 




* [PATCH 2/3] x86/HVM: fail virt-to-linear conversion for insn fetches from non-code segments
  2021-12-03 11:18 [PATCH 0/3] x86: insn-fetch related emulation adjustments Jan Beulich
  2021-12-03 11:21 ` [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments Jan Beulich
@ 2021-12-03 11:22 ` Jan Beulich
  2021-12-03 11:49   ` Andrew Cooper
  2021-12-03 11:23 ` [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook Jan Beulich
  2021-12-10  9:43 ` Ping: [PATCH 0/3] x86: insn-fetch related emulation adjustments Jan Beulich
  3 siblings, 1 reply; 11+ messages in thread
From: Jan Beulich @ 2021-12-03 11:22 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

Just like (in protected mode) reads may not go to exec-only segments and
writes may not go to non-writable ones, insn fetches may not access data
segments.

Fixes: 623e83716791 ("hvm: Support hardware task switching")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2551,6 +2551,9 @@ bool hvm_vcpu_virtual_to_linear(
      */
     ASSERT(seg < x86_seg_none);
 
+    /* However, check that insn fetches only ever specify CS. */
+    ASSERT(access_type != hvm_access_insn_fetch || seg == x86_seg_cs);
+
     if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_PE) )
     {
         /*
@@ -2615,10 +2618,17 @@ bool hvm_vcpu_virtual_to_linear(
                 if ( (reg->type & 0xa) == 0x8 )
                     goto out; /* execute-only code segment */
                 break;
+
             case hvm_access_write:
                 if ( (reg->type & 0xa) != 0x2 )
                     goto out; /* not a writable data segment */
                 break;
+
+            case hvm_access_insn_fetch:
+                if ( !(reg->type & 0x8) )
+                    goto out; /* not a code segment */
+                break;
+
             default:
                 break;
             }
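[Editorial note: the descriptor-type checks in this switch can be modeled
standalone as below. The function and enumerator names are hypothetical;
the bit tests are the ones from the code above. In the 4-bit descriptor
type field, bit 3 distinguishes code from data, and bit 1 means
"readable" for code segments or "writable" for data segments.]

```c
#include <assert.h>
#include <stdbool.h>

enum access { access_read, access_write, access_insn_fetch };

/*
 * Model of the protected-mode segment type checks: type is the 4-bit
 * descriptor type field (bit 3 = code, bit 1 = readable for code /
 * writable for data).
 */
static bool type_permits(unsigned int type, enum access acc)
{
    switch ( acc )
    {
    case access_read:
        return (type & 0xa) != 0x8; /* no reads from execute-only code */
    case access_write:
        return (type & 0xa) == 0x2; /* writable data segments only */
    case access_insn_fetch:
        return type & 0x8;          /* code segments only (the new check) */
    }
    return false;
}
```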




* [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook
  2021-12-03 11:18 [PATCH 0/3] x86: insn-fetch related emulation adjustments Jan Beulich
  2021-12-03 11:21 ` [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments Jan Beulich
  2021-12-03 11:22 ` [PATCH 2/3] x86/HVM: fail virt-to-linear conversion for insn fetches from non-code segments Jan Beulich
@ 2021-12-03 11:23 ` Jan Beulich
  2021-12-03 12:24   ` Andrew Cooper
  2021-12-10 12:56   ` Durrant, Paul
  2021-12-10  9:43 ` Ping: [PATCH 0/3] x86: insn-fetch related emulation adjustments Jan Beulich
  3 siblings, 2 replies; 11+ messages in thread
From: Jan Beulich @ 2021-12-03 11:23 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

This is specified (and asserted for in a number of places) to always be
CS. Passing this as an argument in various places is therefore
pointless. The price to pay is two simple new functions, with the
benefit of the PTWR case now gaining a more appropriate error code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
In principle in the PTWR case I think we ought to set PFEC_insn_fetch in
the error code only when NX is seen as available by the guest. Otoh I'd
kind of expect x86_emul_pagefault() to abstract away this detail.
Thoughts?

Note: While probably trivial to re-base ahead, for now this depends on
      "x86emul: a few small steps towards disintegration"
      (https://lists.xen.org/archives/html/xen-devel/2021-08/msg00367.html).

--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -197,14 +197,11 @@ static int fuzz_read_io(
 }
 
 static int fuzz_insn_fetch(
-    enum x86_segment seg,
     unsigned long offset,
     void *p_data,
     unsigned int bytes,
     struct x86_emulate_ctxt *ctxt)
 {
-    assert(seg == x86_seg_cs);
-
     /* Minimal segment limit checking, until full one is being put in place. */
     if ( ctxt->addr_size < 64 && (offset >> 32) )
     {
@@ -222,7 +219,7 @@ static int fuzz_insn_fetch(
         return maybe_fail(ctxt, "insn_fetch", true);
     }
 
-    return data_read(ctxt, seg, "insn_fetch", p_data, bytes);
+    return data_read(ctxt, x86_seg_cs, "insn_fetch", p_data, bytes);
 }
 
 static int _fuzz_rep_read(struct x86_emulate_ctxt *ctxt,
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -2049,8 +2049,7 @@ static void print_insn(const uint8_t *in
 
 void do_test(uint8_t *instr, unsigned int len, unsigned int modrm,
              enum mem_access mem, struct x86_emulate_ctxt *ctxt,
-             int (*fetch)(enum x86_segment seg,
-                          unsigned long offset,
+             int (*fetch)(unsigned long offset,
                           void *p_data,
                           unsigned int bytes,
                           struct x86_emulate_ctxt *ctxt))
@@ -2110,8 +2109,7 @@ void do_test(uint8_t *instr, unsigned in
 }
 
 void predicates_test(void *instr, struct x86_emulate_ctxt *ctxt,
-                     int (*fetch)(enum x86_segment seg,
-                                  unsigned long offset,
+                     int (*fetch)(unsigned long offset,
                                   void *p_data,
                                   unsigned int bytes,
                                   struct x86_emulate_ctxt *ctxt))
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -594,14 +594,13 @@ static int read(
 }
 
 static int fetch(
-    enum x86_segment seg,
     unsigned long offset,
     void *p_data,
     unsigned int bytes,
     struct x86_emulate_ctxt *ctxt)
 {
     if ( verbose )
-        printf("** %s(%u, %p,, %u,)\n", __func__, seg, (void *)offset, bytes);
+        printf("** %s(CS:%p,, %u,)\n", __func__, (void *)offset, bytes);
 
     memcpy(p_data, (void *)offset, bytes);
     return X86EMUL_OKAY;
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -113,8 +113,7 @@ WRAP(puts);
 void evex_disp8_test(void *instr, struct x86_emulate_ctxt *ctxt,
                      const struct x86_emulate_ops *ops);
 void predicates_test(void *instr, struct x86_emulate_ctxt *ctxt,
-                     int (*fetch)(enum x86_segment seg,
-                                  unsigned long offset,
+                     int (*fetch)(unsigned long offset,
                                   void *p_data,
                                   unsigned int bytes,
                                   struct x86_emulate_ctxt *ctxt));
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1294,7 +1294,6 @@ static int hvmemul_read(
 }
 
 int hvmemul_insn_fetch(
-    enum x86_segment seg,
     unsigned long offset,
     void *p_data,
     unsigned int bytes,
@@ -1312,7 +1311,7 @@ int hvmemul_insn_fetch(
     if ( !bytes ||
          unlikely((insn_off + bytes) > hvmemul_ctxt->insn_buf_bytes) )
     {
-        int rc = __hvmemul_read(seg, offset, p_data, bytes,
+        int rc = __hvmemul_read(x86_seg_cs, offset, p_data, bytes,
                                 hvm_access_insn_fetch, hvmemul_ctxt);
 
         if ( rc == X86EMUL_OKAY && bytes )
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -162,8 +162,7 @@ hvm_emulate_read(enum x86_segment seg,
 }
 
 static int
-hvm_emulate_insn_fetch(enum x86_segment seg,
-                       unsigned long offset,
+hvm_emulate_insn_fetch(unsigned long offset,
                        void *p_data,
                        unsigned int bytes,
                        struct x86_emulate_ctxt *ctxt)
@@ -172,11 +171,9 @@ hvm_emulate_insn_fetch(enum x86_segment
         container_of(ctxt, struct sh_emulate_ctxt, ctxt);
     unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
 
-    ASSERT(seg == x86_seg_cs);
-
     /* Fall back if requested bytes are not in the prefetch cache. */
     if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
-        return hvm_read(seg, offset, p_data, bytes,
+        return hvm_read(x86_seg_cs, offset, p_data, bytes,
                         hvm_access_insn_fetch, sh_ctxt);
 
     /* Hit the cache. Simple memcpy. */
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -163,6 +163,12 @@ static int read_mem(enum x86_segment seg
     return X86EMUL_OKAY;
 }
 
+static int fetch(unsigned long offset, void *p_data,
+                 unsigned int bytes, struct x86_emulate_ctxt *ctxt)
+{
+    return read_mem(x86_seg_cs, offset, p_data, bytes, ctxt);
+}
+
 void pv_emulate_gate_op(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -205,7 +211,7 @@ void pv_emulate_gate_op(struct cpu_user_
 
     ctxt.ctxt.addr_size = ar & _SEGMENT_DB ? 32 : 16;
     /* Leave zero in ctxt.ctxt.sp_size, as it's not needed for decoding. */
-    state = x86_decode_insn(&ctxt.ctxt, read_mem);
+    state = x86_decode_insn(&ctxt.ctxt, fetch);
     ctxt.insn_fetch = false;
     if ( IS_ERR_OR_NULL(state) )
     {
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1258,8 +1258,7 @@ static int validate(const struct x86_emu
     return X86EMUL_UNHANDLEABLE;
 }
 
-static int insn_fetch(enum x86_segment seg,
-                      unsigned long offset,
+static int insn_fetch(unsigned long offset,
                       void *p_data,
                       unsigned int bytes,
                       struct x86_emulate_ctxt *ctxt)
@@ -1269,8 +1268,6 @@ static int insn_fetch(enum x86_segment s
     unsigned int rc;
     unsigned long addr = poc->cs.base + offset;
 
-    ASSERT(seg == x86_seg_cs);
-
     /* We don't mean to emulate any branches. */
     if ( !bytes )
         return X86EMUL_UNHANDLEABLE;
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -52,6 +52,21 @@ static int ptwr_emulated_read(enum x86_s
     return X86EMUL_OKAY;
 }
 
+static int ptwr_emulated_insn_fetch(unsigned long offset,
+                                    void *p_data, unsigned int bytes,
+                                    struct x86_emulate_ctxt *ctxt)
+{
+    unsigned int rc = copy_from_guest_pv(p_data, (void *)offset, bytes);
+
+    if ( rc )
+    {
+        x86_emul_pagefault(PFEC_insn_fetch, offset + bytes - rc, ctxt);
+        return X86EMUL_EXCEPTION;
+    }
+
+    return X86EMUL_OKAY;
+}
+
 /*
  * p_old being NULL indicates a plain write to occur, while a non-NULL
  * input requests a CMPXCHG-based update.
@@ -247,7 +262,7 @@ static int ptwr_emulated_cmpxchg(enum x8
 
 static const struct x86_emulate_ops ptwr_emulate_ops = {
     .read       = ptwr_emulated_read,
-    .insn_fetch = ptwr_emulated_read,
+    .insn_fetch = ptwr_emulated_insn_fetch,
     .write      = ptwr_emulated_write,
     .cmpxchg    = ptwr_emulated_cmpxchg,
     .validate   = pv_emul_is_mem_write,
@@ -290,14 +305,14 @@ static int ptwr_do_page_fault(struct x86
 
 static const struct x86_emulate_ops mmio_ro_emulate_ops = {
     .read       = x86emul_unhandleable_rw,
-    .insn_fetch = ptwr_emulated_read,
+    .insn_fetch = ptwr_emulated_insn_fetch,
     .write      = mmio_ro_emulated_write,
     .validate   = pv_emul_is_mem_write,
 };
 
 static const struct x86_emulate_ops mmcfg_intercept_ops = {
     .read       = x86emul_unhandleable_rw,
-    .insn_fetch = ptwr_emulated_read,
+    .insn_fetch = ptwr_emulated_insn_fetch,
     .write      = mmcfg_intercept_write,
     .validate   = pv_emul_is_mem_write,
 };
--- a/xen/arch/x86/x86_emulate/decode.c
+++ b/xen/arch/x86/x86_emulate/decode.c
@@ -34,8 +34,7 @@ struct x86_emulate_state *
 x86_decode_insn(
     struct x86_emulate_ctxt *ctxt,
     int (*insn_fetch)(
-        enum x86_segment seg, unsigned long offset,
-        void *p_data, unsigned int bytes,
+        unsigned long offset, void *p_data, unsigned int bytes,
         struct x86_emulate_ctxt *ctxt))
 {
     static DEFINE_PER_CPU(struct x86_emulate_state, state);
@@ -618,7 +617,7 @@ static unsigned int decode_disp8scale(en
    generate_exception_if((uint8_t)(s->ip -                            \
                                    ctxt->regs->r(ip)) > MAX_INST_LEN, \
                          X86_EXC_GP, 0);                              \
-   rc = ops->insn_fetch(x86_seg_cs, _ip, &_x, _size, ctxt);           \
+   rc = ops->insn_fetch(_ip, &_x, _size, ctxt);                       \
    if ( rc ) goto done;                                               \
    _x;                                                                \
 })
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -357,7 +357,7 @@ do {
         ip = (uint16_t)ip;                                              \
     else if ( !mode_64bit() )                                           \
         ip = (uint32_t)ip;                                              \
-    rc = ops->insn_fetch(x86_seg_cs, ip, NULL, 0, ctxt);                \
+    rc = ops->insn_fetch(ip, NULL, 0, ctxt);                            \
     if ( rc ) goto done;                                                \
     _regs.r(ip) = ip;                                                   \
     singlestep = _regs.eflags & X86_EFLAGS_TF;                          \
@@ -2301,7 +2301,7 @@ x86_emulate(
                    ? 8 : op_bytes;
         if ( (rc = read_ulong(x86_seg_ss, sp_post_inc(op_bytes + src.val),
                               &dst.val, op_bytes, ctxt, ops)) != 0 ||
-             (rc = ops->insn_fetch(x86_seg_cs, dst.val, NULL, 0, ctxt)) )
+             (rc = ops->insn_fetch(dst.val, NULL, 0, ctxt)) )
             goto done;
         _regs.r(ip) = dst.val;
         adjust_bnd(ctxt, ops, vex.pfx);
@@ -2822,14 +2822,14 @@ x86_emulate(
             break;
         case 2: /* call (near) */
             dst.val = _regs.r(ip);
-            if ( (rc = ops->insn_fetch(x86_seg_cs, src.val, NULL, 0, ctxt)) )
+            if ( (rc = ops->insn_fetch(src.val, NULL, 0, ctxt)) )
                 goto done;
             _regs.r(ip) = src.val;
             src.val = dst.val;
             adjust_bnd(ctxt, ops, vex.pfx);
             goto push;
         case 4: /* jmp (near) */
-            if ( (rc = ops->insn_fetch(x86_seg_cs, src.val, NULL, 0, ctxt)) )
+            if ( (rc = ops->insn_fetch(src.val, NULL, 0, ctxt)) )
                 goto done;
             _regs.r(ip) = src.val;
             dst.type = OP_NONE;
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -254,13 +254,12 @@ struct x86_emulate_ops
 
     /*
      * insn_fetch: Emulate fetch from instruction byte stream.
-     *  Except for @bytes, all parameters are the same as for 'read'.
+     *  Except for @bytes and missing @seg, all parameters are the same as for
+     *  'read'.
      *  @bytes: Access length (0 <= @bytes < 16, with zero meaning
      *  "validate address only").
-     *  @seg is always x86_seg_cs.
      */
     int (*insn_fetch)(
-        enum x86_segment seg,
         unsigned long offset,
         void *p_data,
         unsigned int bytes,
@@ -750,8 +749,7 @@ struct x86_emulate_state *
 x86_decode_insn(
     struct x86_emulate_ctxt *ctxt,
     int (*insn_fetch)(
-        enum x86_segment seg, unsigned long offset,
-        void *p_data, unsigned int bytes,
+        unsigned long offset, void *p_data, unsigned int bytes,
         struct x86_emulate_ctxt *ctxt));
 
 unsigned int
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -92,8 +92,7 @@ static inline bool handle_mmio(void)
     return hvm_emulate_one_insn(x86_insn_is_mem_access, "MMIO");
 }
 
-int hvmemul_insn_fetch(enum x86_segment seg,
-                       unsigned long offset,
+int hvmemul_insn_fetch(unsigned long offset,
                        void *p_data,
                        unsigned int bytes,
                        struct x86_emulate_ctxt *ctxt);




* Re: [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments
  2021-12-03 11:21 ` [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments Jan Beulich
@ 2021-12-03 11:48   ` Andrew Cooper
  2021-12-03 11:55     ` Jan Beulich
  2021-12-10 12:53   ` Durrant, Paul
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2021-12-03 11:48 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

On 03/12/2021 11:21, Jan Beulich wrote:
> The SDM explicitly permits this, and since that's sensible behavior
> don't special case AMD (where the PM doesn't explicitly say so).

APM explicitly says so too.

"The CLFLUSH instruction executes at any privilege level. CLFLUSH
performs all the segmentation and paging checks that a 1-byte read would
perform, except that it also allows references to execute-only segments."

and

"The CLFLUSHOPT instruction executes at any privilege level. CLFLUSHOPT
performs all the segmentation and paging checks that a 1-byte read would
perform, except that it also allows references to execute-only segments."

> Fixes: 52dba7bd0b36 ("x86emul: generalize wbinvd() hook")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

With the commit message tweaked, Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>.  Far less invasive than I was fearing.

~Andrew



* Re: [PATCH 2/3] x86/HVM: fail virt-to-linear conversion for insn fetches from non-code segments
  2021-12-03 11:22 ` [PATCH 2/3] x86/HVM: fail virt-to-linear conversion for insn fetches from non-code segments Jan Beulich
@ 2021-12-03 11:49   ` Andrew Cooper
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2021-12-03 11:49 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

On 03/12/2021 11:22, Jan Beulich wrote:
> Just like (in protected mode) reads may not go to exec-only segments and
> writes may not go to non-writable ones, insn fetches may not access data
> segments.
>
> Fixes: 623e83716791 ("hvm: Support hardware task switching")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>



* Re: [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments
  2021-12-03 11:48   ` Andrew Cooper
@ 2021-12-03 11:55     ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2021-12-03 11:55 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

On 03.12.2021 12:48, Andrew Cooper wrote:
> On 03/12/2021 11:21, Jan Beulich wrote:
>> The SDM explicitly permits this, and since that's sensible behavior
>> don't special case AMD (where the PM doesn't explicitly say so).
> 
> APM explicitly says so too.
> 
> "The CLFLUSH instruction executes at any privilege level. CLFLUSH
> performs all the segmentation and paging checks that a 1-byte read would
> perform, except that it also allows references to execute-only segments."
> 
> and
> 
> "The CLFLUSHOPT instruction executes at any privilege level. CLFLUSHOPT
> performs all the segmentation and paging checks that a 1-byte read would
> perform, except that it also allows references to execute-only segments."

Somehow I didn't read further after the page-table-related paragraph,
perhaps on the assumption that, like in the SDM, it would all be in
one paragraph.

>> Fixes: 52dba7bd0b36 ("x86emul: generalize wbinvd() hook")
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> With the commit message tweaked, Reviewed-by: Andrew Cooper
> <andrew.cooper3@citrix.com>.  Far less invasive than I was fearing.

Thanks. I've switched to simply saying "Both SDM and PM explicitly
permit this."

Jan




* Re: [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook
  2021-12-03 11:23 ` [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook Jan Beulich
@ 2021-12-03 12:24   ` Andrew Cooper
  2021-12-10 12:56   ` Durrant, Paul
  1 sibling, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2021-12-03 12:24 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

On 03/12/2021 11:23, Jan Beulich wrote:
> This is specified (and asserted for in a number of places) to always be
> CS. Passing this as an argument in various places is therefore
> pointless. The price to pay is two simple new functions,

This is actually a very interesting case study.

Both are indirect targets, so need cf_check (or rather, will do
imminently.  I'll fold a suitable fix when I rebase the CET series).

On the face of it, there's now a pile of parameter shuffling just to get
a 0 in %rdi, which isn't ideal.

However, for fine grained CFI schemes using a type hash, it actually
prevents mixing and matching of read/fetch hooks, so ends up as a
hardening improvement too.

>  with the
> benefit of the PTWR case now gaining a more appropriate error code.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> In principle in the PTWR case I think we ought to set PFEC_insn_fetch in
> the error code only when NX is seen as available by the guest. Otoh I'd
> kind of expect x86_emul_pagefault() to abstract away this detail.
> Thoughts?

I have mixed feelings.  x86_emul_pagefault() is the wrong place to put
such logic because it, like its neighbours, is just a thin wrapper for
filling the pending event information.

Architecturally, PFEC_insn_fetch is visible for NX || SMEP, and we do
have logic to make this happen correctly for HVM guests (confirmed by my
XTF test, which I *still* need to get around to adding to CI).  I think
it's all contained in the main pagewalk, but I can't remember offhand.
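[Editorial sketch of the architectural rule under discussion: the
instruction-fetch bit in a #PF error code is only reported when NX or
SMEP is enabled. The function below is a hypothetical illustration,
not Xen's actual pagewalk code.]

```c
#include <assert.h>
#include <stdbool.h>

#define PFEC_insn_fetch (1u << 4)

/*
 * Model of the architectural visibility rule for the I/D bit in a
 * page-fault error code: it is only reported when NX or SMEP is
 * enabled; otherwise the bit reads as clear.
 */
static unsigned int visible_fetch_pfec(unsigned int pfec,
                                       bool nx, bool smep)
{
    if ( !(nx || smep) )
        pfec &= ~PFEC_insn_fetch;
    return pfec;
}
```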

However, PV guests explicitly share their paging settings with Xen, and
we don't hide EFER.NX based on CPUID, although we do appear to hide
CR4.SMEP unilaterally (hardly surprising).

Given the ubiquity of NX these days, and the fact that PV guests are
known-fuzzy in the pagetable department, I'm not sure it's worth the
overhead of trying to hide.

~Andrew



* Ping: [PATCH 0/3] x86: insn-fetch related emulation adjustments
  2021-12-03 11:18 [PATCH 0/3] x86: insn-fetch related emulation adjustments Jan Beulich
                   ` (2 preceding siblings ...)
  2021-12-03 11:23 ` [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook Jan Beulich
@ 2021-12-10  9:43 ` Jan Beulich
  3 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2021-12-10  9:43 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, xen-devel

Paul,

On 03.12.2021 12:18, Jan Beulich wrote:
> Two fixes and some tidying.
> 
> 1: HVM: permit CLFLUSH{,OPT} on execute-only code segments
> 2: HVM: fail virt-to-linear conversion for insn fetches from non-code segments
> 3: emul: drop "seg" parameter from insn_fetch() hook

may I please ask for an ack or otherwise on patches 1 and 3 here?

Thanks, Jan




* Re: [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments
  2021-12-03 11:21 ` [PATCH 1/3] x86/HVM: permit CLFLUSH{,OPT} on execute-only code segments Jan Beulich
  2021-12-03 11:48   ` Andrew Cooper
@ 2021-12-10 12:53   ` Durrant, Paul
  1 sibling, 0 replies; 11+ messages in thread
From: Durrant, Paul @ 2021-12-10 12:53 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

On 03/12/2021 03:21, Jan Beulich wrote:
> The SDM explicitly permits this, and since that's sensible behavior
> don't special case AMD (where the PM doesn't explicitly say so).
> 
> Fixes: 52dba7bd0b36 ("x86emul: generalize wbinvd() hook")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Paul Durrant <paul@xen.org>



* Re: [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook
  2021-12-03 11:23 ` [PATCH 3/3] x86emul: drop "seg" parameter from insn_fetch() hook Jan Beulich
  2021-12-03 12:24   ` Andrew Cooper
@ 2021-12-10 12:56   ` Durrant, Paul
  1 sibling, 0 replies; 11+ messages in thread
From: Durrant, Paul @ 2021-12-10 12:56 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Paul Durrant

On 03/12/2021 03:23, Jan Beulich wrote:
> This is specified (and asserted for in a number of places) to always be
> CS. Passing this as an argument in various places is therefore
> pointless. The price to pay is two simple new functions, with the
> benefit of the PTWR case now gaining a more appropriate error code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

HVM emulate parts...

Acked-by: Paul Durrant <paul@xen.org>


