From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: "Andrew Cooper" <andrew.cooper3@citrix.com>,
"Wei Liu" <wl@xen.org>, "Jan Beulich" <JBeulich@suse.com>,
"Roger Pau Monné" <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] x86/boot: Simplify %fs setup in trampoline_setup
Date: Mon, 12 Aug 2019 16:10:32 +0100 [thread overview]
Message-ID: <20190812151032.9353-1-andrew.cooper3@citrix.com> (raw)
mov/shr is easier to follow than shld, and doesn't have a merge dependency on
the previous value of %edx. Shorten the rest of the code by streamlining the
comments.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
In addition to being clearer to follow, mov/shr is faster than shld to decode
and execute. See https://godbolt.org/z/A5kvuC for the latency/throughput/port
analysis, the Intel Optimisation guide which classifies them as "Slow Int"
instructions, or the AMD Optimisation guide which specifically has a section
entitled "Alternatives to SHLD Instruction".
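
For reference, the byte scattering that the new mov/shr sequence performs can
be sketched in Python. The example base value below is hypothetical, chosen
only to make each byte's destination visible; the descriptor offsets (2, 4, 7)
match the GDT base-address layout the patch writes to:

```python
def set_descriptor_base(desc: bytearray, base: int) -> None:
    """Place a 32-bit segment base into an 8-byte GDT descriptor."""
    desc[2] = base & 0xff           # mov %si, BOOT_FS+2: bits 0-15 (low byte)
    desc[3] = (base >> 8) & 0xff    #                     bits 0-15 (high byte)
    edx = (base >> 16) & 0xffff     # mov %esi, %edx ; shr $16, %edx
    desc[4] = edx & 0xff            # mov %dl, BOOT_FS+4: bits 16-23
    desc[7] = (edx >> 8) & 0xff     # mov %dh, BOOT_FS+7: bits 24-31

desc = bytearray(8)
set_descriptor_base(desc, 0x12345678)   # hypothetical load address
```

Note that the shr produces both the %dl (bits 16-23) and %dh (bits 24-31)
bytes in one register, with no dependency on %edx's prior contents, which is
what the shld version could not avoid.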
---
xen/arch/x86/boot/head.S | 27 ++++++++++-----------------
1 file changed, 10 insertions(+), 17 deletions(-)
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 782deac0d4..26b680521d 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -556,24 +556,17 @@ trampoline_setup:
/*
* Called on legacy BIOS and EFI platforms.
*
- * Initialize bits 0-15 of BOOT_FS segment descriptor base address.
+ * Set the BOOT_FS descriptor base address to %esi.
*/
- mov %si,BOOT_FS+2+sym_esi(trampoline_gdt)
-
- /* Initialize bits 16-23 of BOOT_FS segment descriptor base address. */
- shld $16,%esi,%edx
- mov %dl,BOOT_FS+4+sym_esi(trampoline_gdt)
-
- /* Initialize bits 24-31 of BOOT_FS segment descriptor base address. */
- mov %dh,BOOT_FS+7+sym_esi(trampoline_gdt)
-
- /*
- * Initialize %fs and later use it to access Xen data where possible.
- * According to Intel 64 and IA-32 Architectures Software Developer's
- * Manual it is safe to do that without reloading GDTR before.
- */
- mov $BOOT_FS,%edx
- mov %edx,%fs
+ mov %esi, %edx
+ shr $16, %edx
+ mov %si, BOOT_FS + 2 + sym_esi(trampoline_gdt) /* Bits 0-15 */
+ mov %dl, BOOT_FS + 4 + sym_esi(trampoline_gdt) /* Bits 16-23 */
+ mov %dh, BOOT_FS + 7 + sym_esi(trampoline_gdt) /* Bits 24-31 */
+
+ /* Load %fs to allow for access to Xen data. */
+ mov $BOOT_FS, %edx
+ mov %edx, %fs
/* Save Xen image load base address for later use. */
mov %esi,sym_fs(xen_phys_start)
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 3+ messages
2019-08-12 15:10 Andrew Cooper [this message]
2019-08-13 11:05 ` [Xen-devel] [PATCH] x86/boot: Simplify %fs setup in trampoline_setup Wei Liu
2019-08-27 14:24 ` Jan Beulich