* [PATCH] x86/EPT: flush cache when (potentially) limiting cachability
@ 2014-04-25 12:13 Jan Beulich
  2014-05-01 13:39 ` Tim Deegan
  2014-05-30  2:07 ` Liu, SongtaoX
  0 siblings, 2 replies; 16+ messages in thread
From: Jan Beulich @ 2014-04-25 12:13 UTC (permalink / raw)
  To: xen-devel; +Cc: Keir Fraser, Kevin Tian, Eddie Dong, Jun Nakajima, Tim Deegan

[-- Attachment #1: Type: text/plain, Size: 1902 bytes --]

While generally such guest-side changes ought to be followed by
guest-initiated flushes, we're already flushing the cache under similar
conditions elsewhere (e.g. when the guest sets CR0.CD), so let's do so
here too.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Note that this goes on top of the still pending series titled
"x86/EPT: miscellaneous further fixes to EMT determination" (see
http://lists.xenproject.org/archives/html/xen-devel/2014-04/msg02932.html).

It remains to be determined whether we should gate all of these
flushes on need_iommu() and/or cache_flush_permitted(). OTOH the
operations they hang off of are infrequent, so there's no severe
performance penalty either way.

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -649,8 +649,11 @@ int32_t hvm_set_mem_pinned_cacheattr(
             {
                 rcu_read_unlock(&pinned_cacheattr_rcu_lock);
                 list_del_rcu(&range->list);
+                type = range->type;
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
                 p2m_memory_type_changed(d);
+                if ( type != PAT_TYPE_UNCACHABLE )
+                    flush_all(FLUSH_CACHE);
                 return 0;
             }
         rcu_read_unlock(&pinned_cacheattr_rcu_lock);
@@ -697,6 +700,8 @@ int32_t hvm_set_mem_pinned_cacheattr(
 
     list_add_rcu(&range->list, &d->arch.hvm_domain.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
+    if ( type != PAT_TYPE_WRBACK )
+        flush_all(FLUSH_CACHE);
 
     return 0;
 }
@@ -786,7 +791,10 @@ HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save
 void memory_type_changed(struct domain *d)
 {
     if ( iommu_enabled && d->vcpu && d->vcpu[0] )
+    {
         p2m_memory_type_changed(d);
+        flush_all(FLUSH_CACHE);
+    }
 }
 
 int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,







Thread overview: 16+ messages
2014-04-25 12:13 [PATCH] x86/EPT: flush cache when (potentially) limiting cachability Jan Beulich
2014-05-01 13:39 ` Tim Deegan
2014-05-30  2:07 ` Liu, SongtaoX
2014-05-30  6:26   ` Jan Beulich
2014-05-30  7:02     ` Jan Beulich
2014-05-30  7:34       ` Liu, SongtaoX
2014-05-30 11:20         ` Jan Beulich
2014-06-12  1:28           ` Liu, SongtaoX
2014-06-12  8:43             ` Ian Campbell
2014-06-16 13:01             ` Jan Beulich
2014-06-17  2:19               ` Liu, SongtaoX
2014-06-17  7:11                 ` Jan Beulich
2014-06-17  8:33                   ` Liu, SongtaoX
2014-06-17 15:34                     ` Jan Beulich
2014-06-18  3:09                       ` Liu, SongtaoX
2014-06-18 10:02                         ` Jan Beulich
