From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "Andrew Cooper" <andrew.cooper3@citrix.com>,
"Wei Liu" <wl@xen.org>, "Roger Pau Monné" <roger.pau@citrix.com>,
"Tim Deegan" <tim@xen.org>,
"George Dunlap" <george.dunlap@citrix.com>
Subject: [PATCH 11/17] x86/shadow: polish shadow_write_entries()
Date: Thu, 14 Jan 2021 16:08:30 +0100
Message-ID: <57495f9e-4a03-0317-6985-d32a694194ef@suse.com>
In-Reply-To: <4f1975a9-bdd9-f556-9db5-eb6c428f258f@suse.com>

First of all, avoid the initial dummy write: try to write the actual
new value instead, and start the loop from 1 if this succeeds. Further,
drop safe_write_entry() and use write_atomic() instead (sketched after
the list below), which at the same time eliminates the need for the
BUILD_BUG_ON() there.
Then
- use const and unsigned,
- drop a redundant NULL check,
- don't open-code PAGE_OFFSET() and IS_ALIGNED(),
- adjust comment style.
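
As an illustration (not part of the patch): a minimal standalone C
sketch of why write_atomic() can replace safe_write_entry(), together
with stand-ins for the helpers the last two bullets refer to. The
underscore-suffixed names are made up for this sketch; their semantics
are only inferred from the open-coded expressions the patch removes.

#include <assert.h>
#include <stdint.h>

/* Assumed equivalents of the replaced open-coded expressions: */
#define PAGE_SIZE_          4096UL
#define PAGE_OFFSET_(p)     ((unsigned long)(p) & (PAGE_SIZE_ - 1))
#define IS_ALIGNED_(v, a)   (((v) & ((a) - 1)) == 0)

/*
 * Stand-in for write_atomic() acting on an unsigned long: an aligned,
 * native-word-sized volatile store is a single indivisible write on
 * x86, so hardware walking the page tables cannot pick up a torn
 * entry.  That is all safe_write_entry() guaranteed; and since the
 * real write_atomic() keys off the size of the field actually written
 * (the PTE's raw unsigned long), the explicit BUILD_BUG_ON() size
 * check is no longer needed either.
 */
static inline void write_atomic_ulong(unsigned long *p, unsigned long v)
{
    assert(IS_ALIGNED_((uintptr_t)p, sizeof(*p)));
    *(volatile unsigned long *)p = v;
}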
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -746,50 +746,50 @@ l1e_propagate_from_guest(struct vcpu *v,
* functions which ever write (non-zero) data onto a shadow page.
*/
-static inline void safe_write_entry(void *dst, void *src)
-/* Copy one PTE safely when processors might be running on the
- * destination pagetable. This does *not* give safety against
- * concurrent writes (that's what the paging lock is for), just
- * stops the hardware picking up partially written entries. */
-{
- volatile unsigned long *d = dst;
- unsigned long *s = src;
- ASSERT(!((unsigned long) d & (sizeof (shadow_l1e_t) - 1)));
- /* In 64-bit, sizeof(pte) == sizeof(ulong) == 1 word,
- * which will be an atomic write, since the entry is aligned. */
- BUILD_BUG_ON(sizeof (shadow_l1e_t) != sizeof (unsigned long));
- *d = *s;
-}
-
-
static inline void
-shadow_write_entries(void *d, void *s, int entries, mfn_t mfn)
-/* This function does the actual writes to shadow pages.
+shadow_write_entries(void *d, const void *s, unsigned int entries, mfn_t mfn)
+/*
+ * This function does the actual writes to shadow pages.
* It must not be called directly, since it doesn't do the bookkeeping
- * that shadow_set_l*e() functions do. */
+ * that shadow_set_l*e() functions do.
+ *
+ * Copy PTEs safely when processors might be running on the
+ * destination pagetable. This does *not* give safety against
+ * concurrent writes (that's what the paging lock is for), just
+ * stops the hardware picking up partially written entries.
+ */
{
shadow_l1e_t *dst = d;
- shadow_l1e_t *src = s;
+ const shadow_l1e_t *src = s;
void *map = NULL;
- int i;
+ unsigned int i = 0;
- /* Because we mirror access rights at all levels in the shadow, an
+ /*
+ * Because we mirror access rights at all levels in the shadow, an
* l2 (or higher) entry with the RW bit cleared will leave us with
* no write access through the linear map.
* We detect that by writing to the shadow with put_unsafe() and
- * using map_domain_page() to get a writeable mapping if we need to. */
- if ( put_unsafe(*dst, dst) )
+ * using map_domain_page() to get a writeable mapping if we need to.
+ */
+ if ( put_unsafe(*src, dst) )
{
perfc_incr(shadow_linear_map_failed);
map = map_domain_page(mfn);
- dst = map + ((unsigned long)dst & (PAGE_SIZE - 1));
+ dst = map + PAGE_OFFSET(dst);
+ }
+ else
+ {
+ ++src;
+ ++dst;
+ i = 1;
}
+ ASSERT(IS_ALIGNED((unsigned long)dst, sizeof(*dst)));
- for ( i = 0; i < entries; i++ )
- safe_write_entry(dst++, src++);
+ for ( ; i < entries; i++ )
+ write_atomic(&dst++->l1, src++->l1);
- if ( map != NULL ) unmap_domain_page(map);
+ unmap_domain_page(map);
}
/* type is only used to distinguish grant map pages from ordinary RAM
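
For completeness, a hedged self-contained sketch of the resulting
control flow (simplified C with invented stubs, not the Xen source):
entry 0 is first tried through the possibly read-only linear map; on a
fault the destination is rebased onto a writeable fallback mapping and
the loop runs from 0, otherwise entry 0 is already in place and the
loop starts at 1.

#include <stdio.h>

/* Invented stubs: put_unsafe_stub() models put_unsafe() (nonzero
 * return = the write through the linear map faulted); 'fallback'
 * models the writeable alias from map_domain_page(). */
static int linear_writeable;

static int put_unsafe_stub(unsigned long val, unsigned long *dst)
{
    if ( !linear_writeable )
        return 1;              /* faulted: caller must remap */
    *dst = val;
    return 0;                  /* entry 0 written on the fast path */
}

static void write_entries(unsigned long *dst, const unsigned long *src,
                          unsigned int entries, unsigned long *fallback)
{
    unsigned int i = 0;

    if ( put_unsafe_stub(*src, dst) )
        dst = fallback;        /* slow path: i stays 0 */
    else
    {
        ++src;
        ++dst;
        i = 1;                 /* fast path: entry 0 already written */
    }

    for ( ; i < entries; i++ )
        *dst++ = *src++;       /* write_atomic() in the real code */
}

int main(void)
{
    unsigned long s[4] = { 1, 2, 3, 4 }, d[4] = { 0 };

    linear_writeable = 1;      /* exercise the fast path */
    write_entries(d, s, 4, d);
    printf("%lu %lu %lu %lu\n", d[0], d[1], d[2], d[3]);

    return 0;
}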