All of lore.kernel.org
* [PATCH v2 0/3] x86/shadow: sh_page_fault() adjustments
@ 2023-01-23 14:25 Jan Beulich
  2023-01-23 14:26 ` [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault() Jan Beulich
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Jan Beulich @ 2023-01-23 14:25 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

The original 2nd patch of v1 was split into two, and a 3rd patch (the
1st one here) was added.

1: move dm-mmio handling code in sh_page_fault()
2: mark more of sh_page_fault() HVM-only
3: drop dead code from HVM-only sh_page_fault() pieces

Jan



* [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault()
  2023-01-23 14:25 [PATCH v2 0/3] x86/shadow: sh_page_fault() adjustments Jan Beulich
@ 2023-01-23 14:26 ` Jan Beulich
  2023-01-23 15:08   ` Jan Beulich
  2023-01-23 14:27 ` [PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only Jan Beulich
  2023-01-23 14:27 ` [PATCH v2 3/3] x86/shadow: drop dead code from HVM-only sh_page_fault() pieces Jan Beulich
  2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2023-01-23 14:26 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

Do away with the partly mis-named "mmio" label there, which really is
only about emulated MMIO. Move the code to the place where the sole
"goto" was. Re-order steps slightly: Assertion first, perfc increment
outside of the locked region, and "gpa" calculation closer to the first
use of the variable. Also make the HVM conditional cover the entire
if(), as p2m_mmio_dm isn't applicable to PV; specifically get_gfn()
won't ever return this type for PV domains.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2588,13 +2588,33 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#ifdef CONFIG_HVM
+
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm )
     {
+        ASSERT(is_hvm_vcpu(v));
+        if ( !guest_mode(regs) )
+            goto not_a_shadow_fault;
+
+        sh_audit_gw(v, &gw);
         gpa = guest_walk_to_gpa(&gw);
-        goto mmio;
+        SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
+        shadow_audit_tables(v);
+        sh_reset_early_unshadow(v);
+
+        paging_unlock(d);
+        put_gfn(d, gfn_x(gfn));
+
+        perfc_incr(shadow_fault_mmio);
+        trace_shadow_gen(TRC_SHADOW_MMIO, va);
+
+        return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
+               ? EXCRET_fault_fixed : 0;
     }
 
+#endif /* CONFIG_HVM */
+
     /* Ignore attempts to write to read-only memory. */
     if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
         goto emulate_readonly; /* skip over the instruction */
@@ -2867,25 +2887,6 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 #endif /* CONFIG_HVM */
 
- mmio:
-    if ( !guest_mode(regs) )
-        goto not_a_shadow_fault;
-#ifdef CONFIG_HVM
-    ASSERT(is_hvm_vcpu(v));
-    perfc_incr(shadow_fault_mmio);
-    sh_audit_gw(v, &gw);
-    SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
-    shadow_audit_tables(v);
-    sh_reset_early_unshadow(v);
-    paging_unlock(d);
-    put_gfn(d, gfn_x(gfn));
-    trace_shadow_gen(TRC_SHADOW_MMIO, va);
-    return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
-            ? EXCRET_fault_fixed : 0);
-#else
-    BUG();
-#endif
-
  not_a_shadow_fault:
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("not a shadow fault\n");




* [PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only
  2023-01-23 14:25 [PATCH v2 0/3] x86/shadow: sh_page_fault() adjustments Jan Beulich
  2023-01-23 14:26 ` [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault() Jan Beulich
@ 2023-01-23 14:27 ` Jan Beulich
  2023-02-24 16:39   ` Andrew Cooper
  2023-01-23 14:27 ` [PATCH v2 3/3] x86/shadow: drop dead code from HVM-only sh_page_fault() pieces Jan Beulich
  2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2023-01-23 14:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

The types p2m_is_readonly() checks for aren't applicable to PV;
specifically get_gfn() won't ever return any such type for PV domains.
Extend the HVM-conditional block of code, also past the subsequent HVM-
only if(). This way the "emulate_readonly" label also becomes unreachable when
!HVM, so move the conditional there upwards as well. Noticing the
earlier shadow_mode_refcounts() check, move it up even further, right
after that check. With that, the "done" label also needs marking as
potentially unused.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Parts split off to a subsequent patch.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2613,8 +2613,6 @@ static int cf_check sh_page_fault(
                ? EXCRET_fault_fixed : 0;
     }
 
-#endif /* CONFIG_HVM */
-
     /* Ignore attempts to write to read-only memory. */
     if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
         goto emulate_readonly; /* skip over the instruction */
@@ -2633,12 +2631,14 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#endif /* CONFIG_HVM */
+
     perfc_incr(shadow_fault_fixed);
     d->arch.paging.log_dirty.fault_count++;
     sh_reset_early_unshadow(v);
 
     trace_shadow_fixup(gw.l1e, va);
- done:
+ done: __maybe_unused;
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("fixed\n");
     shadow_audit_tables(v);
@@ -2650,6 +2650,7 @@ static int cf_check sh_page_fault(
     if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
         goto not_a_shadow_fault;
 
+#ifdef CONFIG_HVM
     /*
      * We do not emulate user writes. Instead we use them as a hint that the
      * page is no longer a page table. This behaviour differs from native, but
@@ -2677,7 +2678,6 @@ static int cf_check sh_page_fault(
         goto not_a_shadow_fault;
     }
 
-#ifdef CONFIG_HVM
     /* Unshadow if we are writing to a toplevel pagetable that is
      * flagged as a dying process, and that is not currently used. */
     if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&




* [PATCH v2 3/3] x86/shadow: drop dead code from HVM-only sh_page_fault() pieces
  2023-01-23 14:25 [PATCH v2 0/3] x86/shadow: sh_page_fault() adjustments Jan Beulich
  2023-01-23 14:26 ` [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault() Jan Beulich
  2023-01-23 14:27 ` [PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only Jan Beulich
@ 2023-01-23 14:27 ` Jan Beulich
  2023-02-24 16:40   ` Andrew Cooper
  2 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2023-01-23 14:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

The shadow_mode_refcounts() check immediately ahead of the "emulate"
label renders redundant two subsequent is_hvm_domain() checks (the
latter of which was already redundant with the former).

Also guest_mode() checks are pointless when we already know we're
dealing with an HVM domain.

Finally style-adjust a comment which otherwise would be fully visible as
patch context anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New, split off from earlier patch.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2594,8 +2594,6 @@ static int cf_check sh_page_fault(
     if ( p2mt == p2m_mmio_dm )
     {
         ASSERT(is_hvm_vcpu(v));
-        if ( !guest_mode(regs) )
-            goto not_a_shadow_fault;
 
         sh_audit_gw(v, &gw);
         gpa = guest_walk_to_gpa(&gw);
@@ -2647,7 +2645,7 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 
  emulate:
-    if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
+    if ( !shadow_mode_refcounts(d) )
         goto not_a_shadow_fault;
 
 #ifdef CONFIG_HVM
@@ -2672,16 +2670,11 @@ static int cf_check sh_page_fault(
      * caught by user-mode page-table check above.
      */
  emulate_readonly:
-    if ( !is_hvm_domain(d) )
-    {
-        ASSERT_UNREACHABLE();
-        goto not_a_shadow_fault;
-    }
-
-    /* Unshadow if we are writing to a toplevel pagetable that is
-     * flagged as a dying process, and that is not currently used. */
-    if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&
-         mfn_to_page(gmfn)->pagetable_dying )
+    /*
+     * Unshadow if we are writing to a toplevel pagetable that is
+     * flagged as a dying process, and that is not currently used.
+     */
+    if ( sh_mfn_is_a_page_table(gmfn) && mfn_to_page(gmfn)->pagetable_dying )
     {
         int used = 0;
         struct vcpu *tmp;




* Re: [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault()
  2023-01-23 14:26 ` [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault() Jan Beulich
@ 2023-01-23 15:08   ` Jan Beulich
  2023-02-24 16:37     ` Andrew Cooper
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2023-01-23 15:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

On 23.01.2023 15:26, Jan Beulich wrote:
> Do away with the partly mis-named "mmio" label there, which really is
> only about emulated MMIO. Move the code to the place where the sole
> "goto" was. Re-order steps slightly: Assertion first, perfc increment
> outside of the locked region, and "gpa" calculation closer to the first
> use of the variable. Also make the HVM conditional cover the entire
> if(), as p2m_mmio_dm isn't applicable to PV; specifically get_gfn()
> won't ever return this type for PV domains.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> 
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c

I've sent a stale patch, I'm sorry. This further hunk is needed to keep
!HVM builds working:

@@ -2144,8 +2144,8 @@ static int cf_check sh_page_fault(
     gfn_t gfn = _gfn(0);
     mfn_t gmfn, sl1mfn = _mfn(0);
     shadow_l1e_t sl1e, *ptr_sl1e;
-    paddr_t gpa;
 #ifdef CONFIG_HVM
+    paddr_t gpa;
     struct sh_emulate_ctxt emul_ctxt;
     const struct x86_emulate_ops *emul_ops;
     int r;

Jan

> @@ -2588,13 +2588,33 @@ static int cf_check sh_page_fault(
>          goto emulate;
>      }
>  
> +#ifdef CONFIG_HVM
> +
>      /* Need to hand off device-model MMIO to the device model */
>      if ( p2mt == p2m_mmio_dm )
>      {
> +        ASSERT(is_hvm_vcpu(v));
> +        if ( !guest_mode(regs) )
> +            goto not_a_shadow_fault;
> +
> +        sh_audit_gw(v, &gw);
>          gpa = guest_walk_to_gpa(&gw);
> -        goto mmio;
> +        SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
> +        shadow_audit_tables(v);
> +        sh_reset_early_unshadow(v);
> +
> +        paging_unlock(d);
> +        put_gfn(d, gfn_x(gfn));
> +
> +        perfc_incr(shadow_fault_mmio);
> +        trace_shadow_gen(TRC_SHADOW_MMIO, va);
> +
> +        return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
> +               ? EXCRET_fault_fixed : 0;
>      }
>  
> +#endif /* CONFIG_HVM */
> +
>      /* Ignore attempts to write to read-only memory. */
>      if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
>          goto emulate_readonly; /* skip over the instruction */
> @@ -2867,25 +2887,6 @@ static int cf_check sh_page_fault(
>      return EXCRET_fault_fixed;
>  #endif /* CONFIG_HVM */
>  
> - mmio:
> -    if ( !guest_mode(regs) )
> -        goto not_a_shadow_fault;
> -#ifdef CONFIG_HVM
> -    ASSERT(is_hvm_vcpu(v));
> -    perfc_incr(shadow_fault_mmio);
> -    sh_audit_gw(v, &gw);
> -    SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
> -    shadow_audit_tables(v);
> -    sh_reset_early_unshadow(v);
> -    paging_unlock(d);
> -    put_gfn(d, gfn_x(gfn));
> -    trace_shadow_gen(TRC_SHADOW_MMIO, va);
> -    return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
> -            ? EXCRET_fault_fixed : 0);
> -#else
> -    BUG();
> -#endif
> -
>   not_a_shadow_fault:
>      sh_audit_gw(v, &gw);
>      SHADOW_PRINTK("not a shadow fault\n");
> 
> 




* Re: [PATCH v2 1/3] x86/shadow: move dm-mmio handling code in sh_page_fault()
  2023-01-23 15:08   ` Jan Beulich
@ 2023-02-24 16:37     ` Andrew Cooper
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Cooper @ 2023-02-24 16:37 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

On 23/01/2023 3:08 pm, Jan Beulich wrote:
> On 23.01.2023 15:26, Jan Beulich wrote:
>> Do away with the partly mis-named "mmio" label there, which really is
>> only about emulated MMIO. Move the code to the place where the sole
>> "goto" was. Re-order steps slightly: Assertion first, perfc increment
>> outside of the locked region, and "gpa" calculation closer to the first
>> use of the variable. Also make the HVM conditional cover the entire
>> if(), as p2m_mmio_dm isn't applicable to PV; specifically get_gfn()
>> won't ever return this type for PV domains.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: New.
>>
>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
> I've sent a stale patch, I'm sorry. This further hunk is needed to keep
> !HVM builds working:
>
> @@ -2144,8 +2144,8 @@ static int cf_check sh_page_fault(
>      gfn_t gfn = _gfn(0);
>      mfn_t gmfn, sl1mfn = _mfn(0);
>      shadow_l1e_t sl1e, *ptr_sl1e;
> -    paddr_t gpa;
>  #ifdef CONFIG_HVM
> +    paddr_t gpa;
>      struct sh_emulate_ctxt emul_ctxt;
>      const struct x86_emulate_ops *emul_ops;
>      int r;

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>



* Re: [PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only
  2023-01-23 14:27 ` [PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only Jan Beulich
@ 2023-02-24 16:39   ` Andrew Cooper
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Cooper @ 2023-02-24 16:39 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

On 23/01/2023 2:27 pm, Jan Beulich wrote:
> The types p2m_is_readonly() checks for aren't applicable to PV;
> specifically get_gfn() won't ever return any such type for PV domains.
> Extend the HVM-conditional block of code, also past the subsequent HVM-
> only if(). This way the "emulate_readonly" label also becomes unreachable when
> !HVM, so move the conditional there upwards as well. Noticing the
> earlier shadow_mode_refcounts() check, move it up even further, right
> after that check. With that, the "done" label also needs marking as
> potentially unused.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>



* Re: [PATCH v2 3/3] x86/shadow: drop dead code from HVM-only sh_page_fault() pieces
  2023-01-23 14:27 ` [PATCH v2 3/3] x86/shadow: drop dead code from HVM-only sh_page_fault() pieces Jan Beulich
@ 2023-02-24 16:40   ` Andrew Cooper
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Cooper @ 2023-02-24 16:40 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Wei Liu, Roger Pau Monné, Tim Deegan, George Dunlap

On 23/01/2023 2:27 pm, Jan Beulich wrote:
> The shadow_mode_refcounts() check immediately ahead of the "emulate"
> label renders redundant two subsequent is_hvm_domain() checks (the
> latter of which was already redundant with the former).
>
> Also guest_mode() checks are pointless when we already know we're
> dealing with an HVM domain.
>
> Finally style-adjust a comment which otherwise would be fully visible as
> patch context anyway.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks for splitting apart - it's much easier to follow like this.

