* [PATCH 0/2] x86/mm: address observation made while working on XSA-387
@ 2021-12-01 10:34 Jan Beulich
  2021-12-01 10:35 ` [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation Jan Beulich
  2021-12-01 10:35 ` [PATCH 2/2] x86/paging: tidy paging_mfn_is_dirty() Jan Beulich
  0 siblings, 2 replies; 8+ messages in thread
From: Jan Beulich @ 2021-12-01 10:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

1: shadow: defer/avoid paging_mfn_is_dirty() invocation
2: paging: tidy paging_mfn_is_dirty()

Jan



^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation
  2021-12-01 10:34 [PATCH 0/2] x86/mm: address observation made while working on XSA-387 Jan Beulich
@ 2021-12-01 10:35 ` Jan Beulich
  2021-12-01 18:33   ` Andrew Cooper
  2021-12-02 19:09   ` Tim Deegan
  2021-12-01 10:35 ` [PATCH 2/2] x86/paging: tidy paging_mfn_is_dirty() Jan Beulich
  1 sibling, 2 replies; 8+ messages in thread
From: Jan Beulich @ 2021-12-01 10:35 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan

paging_mfn_is_dirty() is moderately expensive, so avoid its use unless
its result might actually change anything. This means moving the
surrounding if() down below all other checks that can result in clearing
_PAGE_RW from sflags, in order to then check whether _PAGE_RW is
actually still set there before calling the function.

While moving the block of code, fold two if()s and make a few style
adjustments.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: Perhaps the dirty-VRAM piece also wants moving down a little, such
     that all three "level == 1" conditionals can be folded?

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -604,23 +604,6 @@ _sh_propagate(struct vcpu *v,
                   && !(gflags & _PAGE_DIRTY)) )
         sflags &= ~_PAGE_RW;
 
-    // shadow_mode_log_dirty support
-    //
-    // Only allow the guest write access to a page a) on a demand fault,
-    // or b) if the page is already marked as dirty.
-    //
-    // (We handle log-dirty entirely inside the shadow code, without using the
-    // p2m_ram_logdirty p2m type: only HAP uses that.)
-    if ( unlikely((level == 1) && shadow_mode_log_dirty(d)) )
-    {
-        if ( mfn_valid(target_mfn) ) {
-            if ( ft & FETCH_TYPE_WRITE )
-                paging_mark_dirty(d, target_mfn);
-            else if ( !paging_mfn_is_dirty(d, target_mfn) )
-                sflags &= ~_PAGE_RW;
-        }
-    }
-
 #ifdef CONFIG_HVM
     if ( unlikely(level == 1) && is_hvm_domain(d) )
     {
@@ -661,6 +644,25 @@ _sh_propagate(struct vcpu *v,
                   ) )
         sflags &= ~_PAGE_RW;
 
+    /*
+     * shadow_mode_log_dirty support
+     *
+     * Only allow the guest write access to a page a) on a demand fault,
+     * or b) if the page is already marked as dirty.
+     *
+     * (We handle log-dirty entirely inside the shadow code, without using the
+     * p2m_ram_logdirty p2m type: only HAP uses that.)
+     */
+    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) &&
+         mfn_valid(target_mfn) )
+    {
+        if ( ft & FETCH_TYPE_WRITE )
+            paging_mark_dirty(d, target_mfn);
+        else if ( (sflags & _PAGE_RW) &&
+                  !paging_mfn_is_dirty(d, target_mfn) )
+            sflags &= ~_PAGE_RW;
+    }
+
     // PV guests in 64-bit mode use two different page tables for user vs
     // supervisor permissions, making the guest's _PAGE_USER bit irrelevant.
     // It is always shadowed as present...
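The effect of the move can be sketched in isolation: the expensive bitmap walk now only happens when _PAGE_RW has survived all earlier checks. Everything below is a stand-in (simplified flag values, a stub in place of paging_mfn_is_dirty()), not the real Xen code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_RW          0x2u
#define FETCH_TYPE_WRITE 0x1u

struct dom {
    bool log_dirty;             /* stands in for shadow_mode_log_dirty(d) */
    unsigned int dirty_lookups; /* counts calls to the expensive check */
};

/* Stand-in for the moderately expensive paging_mfn_is_dirty(). */
static bool mfn_is_dirty(struct dom *d, unsigned long mfn)
{
    (void)mfn;
    d->dirty_lookups++;
    return false;               /* pretend the page isn't marked dirty yet */
}

/*
 * Runs after every other check that may have cleared PAGE_RW: the
 * dirty-bitmap walk is skipped entirely when PAGE_RW is already gone.
 */
static uint32_t apply_log_dirty(struct dom *d, uint32_t sflags,
                                unsigned int ft, unsigned long mfn)
{
    if ( d->log_dirty )
    {
        if ( ft & FETCH_TYPE_WRITE )
            ;                   /* paging_mark_dirty(d, mfn) would go here */
        else if ( (sflags & PAGE_RW) && !mfn_is_dirty(d, mfn) )
            sflags &= ~PAGE_RW;
    }
    return sflags;
}
```

Counting the stub's invocations shows the saving: a write fault, or an sflags value that already lost _PAGE_RW, never reaches the lookup at all.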




* [PATCH 2/2] x86/paging: tidy paging_mfn_is_dirty()
  2021-12-01 10:34 [PATCH 0/2] x86/mm: address observation made while working on XSA-387 Jan Beulich
  2021-12-01 10:35 ` [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation Jan Beulich
@ 2021-12-01 10:35 ` Jan Beulich
  2021-12-01 16:06   ` Andrew Cooper
  1 sibling, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2021-12-01 10:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

The function returns a boolean indicator, so make its return type bool. Also
constify its struct domain parameter, even though this requires adjusting
mm_locked_by_me() as well. Furthermore, the function is used by shadow code
only, so make it conditional upon CONFIG_SHADOW_PAGING.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -40,7 +40,7 @@ static inline void mm_lock_init(mm_lock_
     l->unlock_level = 0;
 }
 
-static inline int mm_locked_by_me(mm_lock_t *l)
+static inline int mm_locked_by_me(const mm_lock_t *l)
 {
     return (l->lock.recurse_cpu == current->processor);
 }
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -351,14 +351,14 @@ void paging_mark_dirty(struct domain *d,
     paging_mark_pfn_dirty(d, pfn);
 }
 
-
+#ifdef CONFIG_SHADOW_PAGING
 /* Is this guest page dirty? */
-int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn)
+bool paging_mfn_is_dirty(const struct domain *d, mfn_t gmfn)
 {
     pfn_t pfn;
     mfn_t mfn, *l4, *l3, *l2;
     unsigned long *l1;
-    int rv;
+    bool dirty;
 
     ASSERT(paging_locked_by_me(d));
     ASSERT(paging_mode_log_dirty(d));
@@ -367,36 +367,37 @@ int paging_mfn_is_dirty(struct domain *d
     pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
     /* Invalid pages can't be dirty. */
     if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
-        return 0;
+        return false;
 
     mfn = d->arch.paging.log_dirty.top;
     if ( !mfn_valid(mfn) )
-        return 0;
+        return false;
 
     l4 = map_domain_page(mfn);
     mfn = l4[L4_LOGDIRTY_IDX(pfn)];
     unmap_domain_page(l4);
     if ( !mfn_valid(mfn) )
-        return 0;
+        return false;
 
     l3 = map_domain_page(mfn);
     mfn = l3[L3_LOGDIRTY_IDX(pfn)];
     unmap_domain_page(l3);
     if ( !mfn_valid(mfn) )
-        return 0;
+        return false;
 
     l2 = map_domain_page(mfn);
     mfn = l2[L2_LOGDIRTY_IDX(pfn)];
     unmap_domain_page(l2);
     if ( !mfn_valid(mfn) )
-        return 0;
+        return false;
 
     l1 = map_domain_page(mfn);
-    rv = test_bit(L1_LOGDIRTY_IDX(pfn), l1);
+    dirty = test_bit(L1_LOGDIRTY_IDX(pfn), l1);
     unmap_domain_page(l1);
-    return rv;
-}
 
+    return dirty;
+}
+#endif
 
 /* Read a domain's log-dirty bitmap and stats.  If the operation is a CLEAN,
  * clear the bitmap and stats as well. */
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -173,7 +173,7 @@ void paging_mark_pfn_dirty(struct domain
 
 /* is this guest page dirty? 
  * This is called from inside paging code, with the paging lock held. */
-int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn);
+bool paging_mfn_is_dirty(const struct domain *d, mfn_t gmfn);
 
 /*
  * Log-dirty radix tree indexing:
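The body of the function above follows a simple shape: descend a multi-level table, returning false as soon as a level is absent, and test one bit at the leaf. A two-level toy structure stands in for Xen's L4/L3/L2/L1 radix tree here; the types and index arithmetic are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define LEAF_BITS 64 /* bits of dirty state tracked per leaf */

struct leaf { unsigned long bits; };
struct top  { struct leaf *leaves[4]; };

/*
 * Like the tidied paging_mfn_is_dirty(): bool return, const pointer
 * parameter, early "false" exits when a level isn't populated.
 */
static bool pfn_is_dirty(const struct top *t, unsigned long pfn)
{
    const struct leaf *l = t->leaves[(pfn / LEAF_BITS) % 4];

    if ( !l )          /* level not populated: the page can't be dirty */
        return false;

    return (l->bits >> (pfn % LEAF_BITS)) & 1;
}
```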




* Re: [PATCH 2/2] x86/paging: tidy paging_mfn_is_dirty()
  2021-12-01 10:35 ` [PATCH 2/2] x86/paging: tidy paging_mfn_is_dirty() Jan Beulich
@ 2021-12-01 16:06   ` Andrew Cooper
  2021-12-02  8:38     ` Jan Beulich
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Cooper @ 2021-12-01 16:06 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

On 01/12/2021 10:35, Jan Beulich wrote:
> The function returns a boolean indicator, so make its return type bool. Also
> constify its struct domain parameter, even though this requires adjusting
> mm_locked_by_me() as well. Furthermore, the function is used by shadow code
> only, so make it conditional upon CONFIG_SHADOW_PAGING.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/mm/mm-locks.h
> +++ b/xen/arch/x86/mm/mm-locks.h
> @@ -40,7 +40,7 @@ static inline void mm_lock_init(mm_lock_
>      l->unlock_level = 0;
>  }
>  
> -static inline int mm_locked_by_me(mm_lock_t *l)
> +static inline int mm_locked_by_me(const mm_lock_t *l)

bool too?

>  {
>      return (l->lock.recurse_cpu == current->processor);
>  }
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -351,14 +351,14 @@ void paging_mark_dirty(struct domain *d,
>      paging_mark_pfn_dirty(d, pfn);
>  }
>  
> -
> +#ifdef CONFIG_SHADOW_PAGING
>  /* Is this guest page dirty? */
> -int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn)
> +bool paging_mfn_is_dirty(const struct domain *d, mfn_t gmfn)
>  {
>      pfn_t pfn;
>      mfn_t mfn, *l4, *l3, *l2;
>      unsigned long *l1;
> -    int rv;
> +    bool dirty;
>  
>      ASSERT(paging_locked_by_me(d));
>      ASSERT(paging_mode_log_dirty(d));
> @@ -367,36 +367,37 @@ int paging_mfn_is_dirty(struct domain *d
>      pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));

There's nothing inherently paging.c-related about this function.
Thoughts on moving the implementation across, rather than #ifdef-ing it?

If not, can we at least correct gmfn to mfn here and in the prototype?

Alternatively, do we want to start putting things like this in a real
library so we can have the toolchain figure this out automatically?

>      /* Invalid pages can't be dirty. */
>      if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
> -        return 0;
> +        return false;
>  
>      mfn = d->arch.paging.log_dirty.top;
>      if ( !mfn_valid(mfn) )

These don't need to be mfn_valid().  They can be checks against
MFN_INVALID instead, because the logdirty bitmap is a Xen internal
structure.
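The distinction can be shown in miniature; both predicates below are stand-ins (the real mfn_valid() is considerably more involved than a range check):

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_MFN (~0ul) /* sentinel, as in Xen */

/*
 * Stand-in for mfn_valid(): a genuine "is this a real page of RAM"
 * predicate, which in Xen costs more than a single comparison.
 */
static bool mfn_valid_stub(unsigned long mfn, unsigned long max_mfn)
{
    return mfn != INVALID_MFN && mfn < max_mfn;
}

/*
 * Sufficient for a Xen-internal structure whose producer only ever
 * stores either a real MFN or INVALID_MFN: one compare, no table walk.
 */
static bool mfn_present(unsigned long mfn)
{
    return mfn != INVALID_MFN;
}
```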

In response to your comment in the previous patch, this would
substantially reduce the overhead of paging_mark_pfn_dirty() and
paging_mfn_is_dirty().

~Andrew



* Re: [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation
  2021-12-01 10:35 ` [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation Jan Beulich
@ 2021-12-01 18:33   ` Andrew Cooper
  2021-12-02  8:15     ` Jan Beulich
  2021-12-02 19:09   ` Tim Deegan
  1 sibling, 1 reply; 8+ messages in thread
From: Andrew Cooper @ 2021-12-01 18:33 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan

On 01/12/2021 10:35, Jan Beulich wrote:
> paging_mfn_is_dirty() is moderately expensive, so avoid its use unless
> its result might actually change anything. This means moving the
> surrounding if() down below all other checks that can result in clearing
> _PAGE_RW from sflags, in order to then check whether _PAGE_RW is
> actually still set there before calling the function.
>
> While moving the block of code, fold two if()s and make a few style
> adjustments.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: Perhaps the dirty-VRAM piece also wants moving down a little, such
>      that all three "level == 1" conditionals can be folded?

TBH, lots of the layout looks dubious, but I'm not sure how worthwhile
micro-optimising it is.  This particular rearrangement is surely
unmeasurable.

Fixing the (mis)use of mfn_valid() in the logdirty infrastructure will
gain a far larger improvement, because that's dropping a fair number of
lfences from multiple paths, but it's still the case that anything here
is rare-to-non-existent in production workloads, and vastly dominated by
the VMExit cost even when migrating shadow VMs.

>
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -604,23 +604,6 @@ _sh_propagate(struct vcpu *v,
>                    && !(gflags & _PAGE_DIRTY)) )
>          sflags &= ~_PAGE_RW;
>  
> -    // shadow_mode_log_dirty support
> -    //
> -    // Only allow the guest write access to a page a) on a demand fault,
> -    // or b) if the page is already marked as dirty.
> -    //
> -    // (We handle log-dirty entirely inside the shadow code, without using the
> -    // p2m_ram_logdirty p2m type: only HAP uses that.)
> -    if ( unlikely((level == 1) && shadow_mode_log_dirty(d)) )
> -    {
> -        if ( mfn_valid(target_mfn) ) {
> -            if ( ft & FETCH_TYPE_WRITE )
> -                paging_mark_dirty(d, target_mfn);
> -            else if ( !paging_mfn_is_dirty(d, target_mfn) )
> -                sflags &= ~_PAGE_RW;
> -        }
> -    }
> -
>  #ifdef CONFIG_HVM
>      if ( unlikely(level == 1) && is_hvm_domain(d) )
>      {
> @@ -661,6 +644,25 @@ _sh_propagate(struct vcpu *v,
>                    ) )
>          sflags &= ~_PAGE_RW;
>  
> +    /*
> +     * shadow_mode_log_dirty support
> +     *
> +     * Only allow the guest write access to a page a) on a demand fault,
> +     * or b) if the page is already marked as dirty.
> +     *
> +     * (We handle log-dirty entirely inside the shadow code, without using the
> +     * p2m_ram_logdirty p2m type: only HAP uses that.)
> +     */
> +    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) &&
> +         mfn_valid(target_mfn) )
> +    {
> +        if ( ft & FETCH_TYPE_WRITE )
> +            paging_mark_dirty(d, target_mfn);
> +        else if ( (sflags & _PAGE_RW) &&
> +                  !paging_mfn_is_dirty(d, target_mfn) )
> +            sflags &= ~_PAGE_RW;

This is almost crying out for a logdirty_test_and_maybe_set() helper,
because the descent of the logdirty trie is common between the two, but
as this would be the only user, probably not worth it.
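For illustration only, one hypothetical shape for such a helper: a single walk that either sets the bit on a write fault or reports its current state. The name and signature are invented here, and a flat bitmap stands in for the trie:

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES 128
#define BITS_PER_WORD (8 * sizeof(unsigned long))

static unsigned long dirty_bitmap[NPAGES / (8 * sizeof(unsigned long))];

/*
 * Returns whether the page may be left writable: true if it was
 * already dirty, or if this call just marked it dirty (write fault).
 */
static bool logdirty_test_and_maybe_set(unsigned long pfn, bool set)
{
    unsigned long *word = &dirty_bitmap[pfn / BITS_PER_WORD];
    unsigned long mask = 1ul << (pfn % BITS_PER_WORD);
    bool was_dirty = *word & mask;

    if ( set )
        *word |= mask;

    return was_dirty || set;
}
```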

However, the more I look at the logdirty logic, the more it is clear
that all the mfn_valid() tests should change to MFN_INVALID, and the
result will be far more efficient.

~Andrew

> +    }
> +
>      // PV guests in 64-bit mode use two different page tables for user vs
>      // supervisor permissions, making the guest's _PAGE_USER bit irrelevant.
>      // It is always shadowed as present...
>
>




* Re: [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation
  2021-12-01 18:33   ` Andrew Cooper
@ 2021-12-02  8:15     ` Jan Beulich
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2021-12-02  8:15 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Tim Deegan, xen-devel

On 01.12.2021 19:33, Andrew Cooper wrote:
> On 01/12/2021 10:35, Jan Beulich wrote:
>> paging_mfn_is_dirty() is moderately expensive, so avoid its use unless
>> its result might actually change anything. This means moving the
>> surrounding if() down below all other checks that can result in clearing
>> _PAGE_RW from sflags, in order to then check whether _PAGE_RW is
>> actually still set there before calling the function.
>>
>> While moving the block of code, fold two if()s and make a few style
>> adjustments.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> TBD: Perhaps the dirty-VRAM piece also wants moving down a little, such
>>      that all three "level == 1" conditionals can be folded?
> 
> TBH, lots of the layout looks dubious, but I'm not sure how worthwhile
> micro-optimising it is.  This particular rearrangement is surely
> unmeasurable.

The point of the rearrangement suggestion was source tidying far more
than micro-optimizing.

> Fixing the (mis)use of mfn_valid() in the logdirty infrastructure will
> gain a far larger improvement, because that's dropping a fair number of
> lfence's from multiple paths, but it's still the case that anything here
> is rare-to-non-existent in production workloads, and vastly dominated by
> the VMExit cost even when migrating shadow VMs.

Quite possible, but entirely orthogonal to this change.

>> @@ -661,6 +644,25 @@ _sh_propagate(struct vcpu *v,
>>                    ) )
>>          sflags &= ~_PAGE_RW;
>>  
>> +    /*
>> +     * shadow_mode_log_dirty support
>> +     *
>> +     * Only allow the guest write access to a page a) on a demand fault,
>> +     * or b) if the page is already marked as dirty.
>> +     *
>> +     * (We handle log-dirty entirely inside the shadow code, without using the
>> +     * p2m_ram_logdirty p2m type: only HAP uses that.)
>> +     */
>> +    if ( level == 1 && unlikely(shadow_mode_log_dirty(d)) &&
>> +         mfn_valid(target_mfn) )
>> +    {
>> +        if ( ft & FETCH_TYPE_WRITE )
>> +            paging_mark_dirty(d, target_mfn);
>> +        else if ( (sflags & _PAGE_RW) &&
>> +                  !paging_mfn_is_dirty(d, target_mfn) )
>> +            sflags &= ~_PAGE_RW;
> 
> This is almost crying out for a logdirty_test_and_maybe_set() helper,
> because the descent of the logdirty trie is common between the two, but
> as this would be the only user, probably not worth it.

I'm struggling to see how this would improve the code construct without
altering the behavior. But then you yourself say that introducing one
probably isn't worth it as long as there's just one use site (and there
aren't ever two descents of the trie, so there's not really anything to
be saved performance-wise).

> However, the more I look at the logdirty logic, the more it is clear
> that all the mfn_valid() tests should change to MFN_INVALID, and the
> result will be far more efficient.

As said - an orthogonal change; nothing to be folded into here imo.

Jan




* Re: [PATCH 2/2] x86/paging: tidy paging_mfn_is_dirty()
  2021-12-01 16:06   ` Andrew Cooper
@ 2021-12-02  8:38     ` Jan Beulich
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2021-12-02  8:38 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, xen-devel

On 01.12.2021 17:06, Andrew Cooper wrote:
> On 01/12/2021 10:35, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm/mm-locks.h
>> +++ b/xen/arch/x86/mm/mm-locks.h
>> @@ -40,7 +40,7 @@ static inline void mm_lock_init(mm_lock_
>>      l->unlock_level = 0;
>>  }
>>  
>> -static inline int mm_locked_by_me(mm_lock_t *l)
>> +static inline int mm_locked_by_me(const mm_lock_t *l)
> 
> bool too?

Oh, indeed. Will do.

>> --- a/xen/arch/x86/mm/paging.c
>> +++ b/xen/arch/x86/mm/paging.c
>> @@ -351,14 +351,14 @@ void paging_mark_dirty(struct domain *d,
>>      paging_mark_pfn_dirty(d, pfn);
>>  }
>>  
>> -
>> +#ifdef CONFIG_SHADOW_PAGING
>>  /* Is this guest page dirty? */
>> -int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn)
>> +bool paging_mfn_is_dirty(const struct domain *d, mfn_t gmfn)
>>  {
>>      pfn_t pfn;
>>      mfn_t mfn, *l4, *l3, *l2;
>>      unsigned long *l1;
>> -    int rv;
>> +    bool dirty;
>>  
>>      ASSERT(paging_locked_by_me(d));
>>      ASSERT(paging_mode_log_dirty(d));
>> @@ -367,36 +367,37 @@ int paging_mfn_is_dirty(struct domain *d
>>      pfn = _pfn(get_gpfn_from_mfn(mfn_x(gmfn)));
> 
> There's nothing inherently paging.c-related about this function.
> Thoughts on moving the implementation across, rather than #ifdef-ing it?

I wouldn't strictly object to a move, but doing so would disconnect this
function from paging_mark_dirty() and other log-dirty code. Please
clarify whether you've explicitly considered this aspect when making the
suggestion (I did when deciding to use #ifdef-s).

Also, to make the changes reasonable to spot, that would be a separate
patch then.

> If not, can we at least correct gmfn to mfn here and in the prototype?

Hmm, that would incur further changes, which I'd prefer to avoid at this
point: The function already has a local variable named "mfn" (as can be
seen in context above).

I also don't see how this request is related to the question of moving
the function.

> Alternatively, do we want to start putting things like this in a real
> library so we can have the toolchain figure this out automatically?

I wouldn't want to go this far with a function like this one. To me it
doesn't feel like code which is actually 

>>      /* Invalid pages can't be dirty. */
>>      if ( unlikely(!VALID_M2P(pfn_x(pfn))) )
>> -        return 0;
>> +        return false;
>>  
>>      mfn = d->arch.paging.log_dirty.top;
>>      if ( !mfn_valid(mfn) )
> 
> These don't need to be mfn_valid().  They can be checks against
> MFN_INVALID instead, because the logdirty bitmap is a Xen internal
> structure.
> 
> In response to your comment in the previous patch, this would
> substantially reduce the overhead of paging_mark_pfn_dirty() and
> paging_mfn_is_dirty().

May I suggest that the conversion from mfn_valid() be a separate
topic and (set of) change(s)? I'd be happy to add a further patch
here to at least deal with its unnecessary uses on log_dirty.top
and alike. Of course such a change won't alter the "moderately
expensive" comment in the earlier patch - it'll be less expensive
then, but still not cheap. Even less so as soon as
map_domain_page() loses its shortcut in release builds (when the
direct map has disappeared).

Jan




* Re: [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation
  2021-12-01 10:35 ` [PATCH 1/2] x86/shadow: defer/avoid paging_mfn_is_dirty() invocation Jan Beulich
  2021-12-01 18:33   ` Andrew Cooper
@ 2021-12-02 19:09   ` Tim Deegan
  1 sibling, 0 replies; 8+ messages in thread
From: Tim Deegan @ 2021-12-02 19:09 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

At 11:35 +0100 on 01 Dec (1638358515), Jan Beulich wrote:
> paging_mfn_is_dirty() is moderately expensive, so avoid its use unless
> its result might actually change anything. This means moving the
> surrounding if() down below all other checks that can result in clearing
> _PAGE_RW from sflags, in order to then check whether _PAGE_RW is
> actually still set there before calling the function.
> 
> While moving the block of code, fold two if()s and make a few style
> adjustments.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tim Deegan <tim@xen.org>

> ---
> TBD: Perhaps the dirty-VRAM piece also wants moving down a little, such
>      that all three "level == 1" conditionals can be folded?

I have no strong feelings on that either way.

Cheers,

Tim.


