Xen-Devel Archive on lore.kernel.org
* [Xen-devel] [PATCH v3 0/5] Clean up x86_64 boot code
@ 2019-08-21 16:35 David Woodhouse
  2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
  0 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-08-21 16:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné


Some cleanups for the boot path, originally inspired by an attempt to
avoid scribbling on arbitrarily-chosen low memory.

In the no-real-mode case we don't need to bounce through low memory at
all; we can run the 32-bit trampoline in-place in the Xen image.

The variables containing information optionally discovered by the
real-mode boot code can be put back in place in the Xen image, and we can
dispense with the bootsym() pointer gymnastics in the C code that accesses
them in low memory.

I haven't yet got to reloc(), which I think exists only to ensure that
the various breadcrumbs left all over the place by the Multiboot
bootloader aren't scribbled on when we copy the 16-bit boot trampoline
into low memory. I'd quite like to kill reloc() and pass the original
pointer up to 64-bit code to be handled in C.

That would require finding a *safe* location to put the 16-bit boot
trampoline though, which doesn't already contain anything that the
bootloader created for us.

In fact, isn't there already a chance that head.S will choose a location
for the trampoline which is already part of a module or contains one of
the Multiboot breadcrumbs?

https://xenbits.xen.org/gitweb/?p=people/dwmw2/xen.git;a=shortlog;h=refs/heads/bootcleanup

v2: Patch #1 of the first series is already merged.
    Fold in minor fixup from Andrew to what is now patch #1.
    Verbally agree to overcome Jan's objections to patch #1, in Chicago.

v3: Another patch merged from v2 posting. And then there were five.
    Update to staging branch, especially commit c3cfa5b30 ("x86: Restore
      IA32_MISC_ENABLE on wakeup").
    Minor bikeshedding and commit comment improvements.
    Ignore Andy's conflicting cleanups except to use them as an excuse for
      not making the GDT sizing automatic as discussed.

David Woodhouse (5):
      x86/boot: Only jump into low trampoline code for real-mode boot
      x86/boot: Split bootsym() into four types of relocations
      x86/boot: Rename trampoline_{start,end} to boot_trampoline_{start,end}
      x86/boot: Copy 16-bit boot variables back up to Xen image
      x86/boot: Do not use trampoline for no-real-mode boot paths

 xen/arch/x86/acpi/power.c         |   6 +++---
 xen/arch/x86/boot/edd.S           |  18 ++++++++++--------
 xen/arch/x86/boot/head.S          |  89 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------
 xen/arch/x86/boot/mem.S           |  14 ++++++++------
 xen/arch/x86/boot/trampoline.S    | 146 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------
 xen/arch/x86/boot/video.S         |  36 +++++++++++++++++++-----------------
 xen/arch/x86/boot/wakeup.S        |  16 ++++++++--------
 xen/arch/x86/cpu/common.c         |   2 +-
 xen/arch/x86/cpu/intel.c          |   2 +-
 xen/arch/x86/efi/efi-boot.h       |  31 +++----------------------------
 xen/arch/x86/platform_hypercall.c |  18 +++++++++---------
 xen/arch/x86/setup.c              |  68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------
 xen/arch/x86/smpboot.c            |   6 +++---
 xen/arch/x86/tboot.c              |   6 +++---
 xen/arch/x86/x86_64/mm.c          |   2 +-
 xen/arch/x86/xen.lds.S            |  28 +++++++++++++++++++++-------
 xen/include/asm-x86/acpi.h        |   2 +-
 xen/include/asm-x86/config.h      |  12 ++++++------
 xen/include/asm-x86/edd.h         |   1 -
 19 files changed, 331 insertions(+), 172 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot
  2019-08-21 16:35 [Xen-devel] [PATCH v3 0/5] Clean up x86_64 boot code David Woodhouse
@ 2019-08-21 16:35 ` David Woodhouse
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations David Woodhouse
                     ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: David Woodhouse @ 2019-08-21 16:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: David Woodhouse <dwmw@amazon.co.uk>

If the no-real-mode flag is set, don't go there at all. This is a prelude
to not even putting it there in the first place.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 xen/arch/x86/boot/head.S       | 10 ++++++++++
 xen/arch/x86/boot/trampoline.S |  4 ----
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 26b680521d..d303379083 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -727,7 +727,17 @@ trampoline_setup:
         /* Switch to low-memory stack which lives at the end of trampoline region. */
         mov     sym_fs(trampoline_phys),%edi
         lea     TRAMPOLINE_SPACE+TRAMPOLINE_STACK_SPACE(%edi),%esp
+        cmpb    $0, sym_fs(skip_realmode)
+        jz      1f
+        /* If no-real-mode, jump straight to trampoline_protmode_entry */
+        lea     trampoline_protmode_entry-trampoline_start(%edi),%eax
+        /* EBX == 0 indicates we are the BP (Boot Processor). */
+        xor     %ebx,%ebx
+        jmp     2f
+1:
+        /* Go via 16-bit code in trampoline_boot_cpu_entry */
         lea     trampoline_boot_cpu_entry-trampoline_start(%edi),%eax
+2:
         pushl   $BOOT_CS32
         push    %eax
 
diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index 7c6a2328d2..429a088b19 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -194,9 +194,6 @@ gdt_48: .word   6*8-1
 
         .code32
 trampoline_boot_cpu_entry:
-        cmpb    $0,bootsym_rel(skip_realmode,5)
-        jnz     .Lskip_realmode
-
         /* Load pseudo-real-mode segments. */
         mov     $BOOT_PSEUDORM_DS,%eax
         mov     %eax,%ds
@@ -276,7 +273,6 @@ trampoline_boot_cpu_entry:
         mov     %eax,%gs
         mov     %eax,%ss
 
-.Lskip_realmode:
         /* EBX == 0 indicates we are the BP (Boot Processor). */
         xor     %ebx,%ebx
 
-- 
2.21.0




* [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations
  2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
@ 2019-08-21 16:35   ` David Woodhouse
  2019-08-30 15:10     ` Jan Beulich
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 3/5] x86/boot: Rename trampoline_{start, end} to boot_trampoline_{start, end} David Woodhouse
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-08-21 16:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: David Woodhouse <dwmw@amazon.co.uk>

As a first step toward using the low-memory trampoline only when necessary
for a legacy boot without no-real-mode, clean up the relocations into four
separate groups.

 • bootsym() is now used only at boot time when no-real-mode isn't set.

 • bootdatasym() is for variables containing information discovered by
   the 16-bit boot code. This is currently accessed directly in place
   in low memory by Xen at runtime, but will be copied back to its
   location in high memory to avoid the pointer gymnastics (and because
   a subsequent patch will stop copying the 16-bit boot code into low
   memory at all when it isn't being used).

 • trampsym() is for the permanent 16-bit trampoline used for AP startup
   and for wake from sleep. This is not used at boot, and can be copied
   into (properly allocated) low memory once the system is running.

 • tramp32sym() is used both at boot and for AP startup/wakeup. During
   boot it can be used in-place, running from the physical address of
   the Xen image. For AP startup it can't, because at that point there
   isn't a full 1:1 mapping of all memory; only the low trampoline page
   is mapped.

No (intentional) functional change yet; just a "cleanup" to allow the
various parts to be treated separately in subsequent patches.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 xen/arch/x86/boot/edd.S        | 16 +++----
 xen/arch/x86/boot/head.S       | 22 +++++++--
 xen/arch/x86/boot/mem.S        | 12 ++---
 xen/arch/x86/boot/trampoline.S | 85 ++++++++++++++++++++++++++--------
 xen/arch/x86/boot/video.S      |  6 +--
 xen/arch/x86/boot/wakeup.S     | 16 +++----
 xen/arch/x86/efi/efi-boot.h    |  8 ++--
 xen/arch/x86/xen.lds.S         | 15 ++++--
 8 files changed, 124 insertions(+), 56 deletions(-)

diff --git a/xen/arch/x86/boot/edd.S b/xen/arch/x86/boot/edd.S
index 3df712bce1..434bbbd960 100644
--- a/xen/arch/x86/boot/edd.S
+++ b/xen/arch/x86/boot/edd.S
@@ -41,7 +41,7 @@ get_edd:
 # This code is sensitive to the size of the structs in edd.h
 edd_start:
         /* ds:si points at fn48 results. Fn41 results go immediately before. */
-        movw    $bootsym(boot_edd_info)+EDDEXTSIZE, %si
+        movw    $bootdatasym(boot_edd_info)+EDDEXTSIZE, %si
         movb    $0x80, %dl                      # BIOS device 0x80
 
 edd_check_ext:
@@ -56,7 +56,7 @@ edd_check_ext:
         movb    %dl, %ds:-8(%si)                # store device number
         movb    %ah, %ds:-7(%si)                # store version
         movw    %cx, %ds:-6(%si)                # store extensions
-        incb    bootsym(boot_edd_info_nr)       # note that we stored something
+        incb    bootdatasym(boot_edd_info_nr)   # note that we stored something
 
 edd_get_device_params:
         movw    $EDDPARMSIZE, %ds:(%si)         # put size
@@ -97,7 +97,7 @@ edd_legacy_done:
 edd_next:
         incb    %dl                             # increment to next device
         jz      edd_done
-        cmpb    $EDD_INFO_MAX,bootsym(boot_edd_info_nr)
+        cmpb    $EDD_INFO_MAX,bootdatasym(boot_edd_info_nr)
         jb      edd_check_ext
 
 edd_done:
@@ -108,11 +108,11 @@ edd_done:
 .Ledd_mbr_sig_start:
         pushw   %es
         movb    $0x80, %dl                      # from device 80
-        movw    $bootsym(boot_mbr_signature), %bx # store buffer ptr in bx
+        movw    $bootdatasym(boot_mbr_signature), %bx # store buffer ptr in bx
 .Ledd_mbr_sig_read:
         pushw   %bx
-        movw    $bootsym(boot_edd_info), %bx
-        movzbw  bootsym(boot_edd_info_nr), %cx
+        movw    $bootdatasym(boot_edd_info), %bx
+        movzbw  bootdatasym(boot_edd_info_nr), %cx
         jcxz    .Ledd_mbr_sig_default
 .Ledd_mbr_sig_find_info:
         cmpb    %dl, (%bx)
@@ -151,12 +151,12 @@ edd_done:
         jne     .Ledd_mbr_sig_next
         movb    %dl, (%bx)                      # store BIOS drive number
         movl    %ecx, 4(%bx)                    # store signature from MBR
-        incb    bootsym(boot_mbr_signature_nr)  # note that we stored something
+        incb    bootdatasym(boot_mbr_signature_nr) # note that we stored something
         addw    $8, %bx                         # increment sig buffer ptr
 .Ledd_mbr_sig_next:
         incb    %dl                             # increment to next device
         jz      .Ledd_mbr_sig_done
-        cmpb    $EDD_MBR_SIG_MAX, bootsym(boot_mbr_signature_nr)
+        cmpb    $EDD_MBR_SIG_MAX, bootdatasym(boot_mbr_signature_nr)
         jb      .Ledd_mbr_sig_read
 .Ledd_mbr_sig_done:
         popw    %es
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index d303379083..e19b31fb85 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -699,14 +699,30 @@ trampoline_setup:
         cmp     $sym_offs(__trampoline_rel_stop),%edi
         jb      1b
 
-        /* Patch in the trampoline segment. */
+        mov     $sym_offs(__trampoline32_rel_start),%edi
+1:
+        mov     %fs:(%edi),%eax
+        add     %edx,%fs:(%edi,%eax)
+        add     $4,%edi
+        cmp     $sym_offs(__trampoline32_rel_stop),%edi
+        jb      1b
+
+        mov     $sym_offs(__bootsym_rel_start),%edi
+1:
+        mov     %fs:(%edi),%eax
+        add     %edx,%fs:(%edi,%eax)
+        add     $4,%edi
+        cmp     $sym_offs(__bootsym_rel_stop),%edi
+        jb      1b
+
+        /* Patch in the boot trampoline segment. */
         shr     $4,%edx
-        mov     $sym_offs(__trampoline_seg_start),%edi
+        mov     $sym_offs(__bootsym_seg_start),%edi
 1:
         mov     %fs:(%edi),%eax
         mov     %dx,%fs:(%edi,%eax)
         add     $4,%edi
-        cmp     $sym_offs(__trampoline_seg_stop),%edi
+        cmp     $sym_offs(__bootsym_seg_stop),%edi
         jb      1b
 
         /* Do not parse command line on EFI platform here. */
diff --git a/xen/arch/x86/boot/mem.S b/xen/arch/x86/boot/mem.S
index 5b9ab5c1de..c5bc774325 100644
--- a/xen/arch/x86/boot/mem.S
+++ b/xen/arch/x86/boot/mem.S
@@ -7,7 +7,7 @@ get_memory_map:
 
 .Lmeme820:
         xorl    %ebx, %ebx                      # continuation counter
-        movw    $bootsym(bios_e820map), %di     # point into the whitelist
+        movw    $bootdatasym(bios_e820map), %di # point into the whitelist
                                                 # so we can have the bios
                                                 # directly write into it.
 
@@ -22,8 +22,8 @@ get_memory_map:
         cmpl    $SMAP,%eax                      # check the return is `SMAP'
         jne     .Lmem88
 
-        incw    bootsym(bios_e820nr)
-        cmpw    $E820_BIOS_MAX, bootsym(bios_e820nr) # up to this many entries
+        incw    bootdatasym(bios_e820nr)
+        cmpw    $E820_BIOS_MAX, bootdatasym(bios_e820nr) # up to this many entries
         jae     .Lmem88
 
         movw    %di,%ax
@@ -35,7 +35,7 @@ get_memory_map:
 .Lmem88:
         movb    $0x88, %ah
         int     $0x15
-        movw    %ax,bootsym(highmem_kb)
+        movw    %ax,bootdatasym(highmem_kb)
 
 .Lmeme801:
         stc                                     # fix to work around buggy
@@ -58,11 +58,11 @@ get_memory_map:
         shll    $6,%edx                         # and go from 64k to 1k chunks
         movzwl  %cx, %ecx
         addl    %ecx, %edx                      # add in lower memory
-        movl    %edx,bootsym(highmem_kb)        # store extended memory size
+        movl    %edx,bootdatasym(highmem_kb)    # store extended memory size
 
 .Lint12:
         int     $0x12
-        movw    %ax,bootsym(lowmem_kb)
+        movw    %ax,bootdatasym(lowmem_kb)
 
         ret
 
diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index 429a088b19..8537aeb917 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -16,21 +16,62 @@
  * not guaranteed to persist.
  */
 
-/* NB. bootsym() is only usable in real mode, or via BOOT_PSEUDORM_DS. */
+/*
+ * There are four sets of relocations:
+ *
+ * bootsym():     Boot-time code relocated to low memory and run only once.
+ *                Only usable at boot; in real mode or via BOOT_PSEUDORM_DS.
+ * bootdatasym(): Boot-time BIOS-discovered data, relocated back up to Xen
+ *                image after discovery.
+ * trampsym():    Permanent trampoline code relocated into low memory for AP
+ *                startup and wakeup.
+ * tramp32sym():  32-bit trampoline code which at boot can be used directly
+ *                from the Xen image in memory, but which will need to be
+ *                relocated into low (well, into *mapped*) memory in order
+ *                to be used for AP startup.
+ */
 #undef bootsym
 #define bootsym(s) ((s)-trampoline_start)
 
 #define bootsym_rel(sym, off, opnd...)     \
         bootsym(sym),##opnd;               \
 111:;                                      \
-        .pushsection .trampoline_rel, "a"; \
+        .pushsection .bootsym_rel, "a";    \
         .long 111b - (off) - .;            \
         .popsection
 
 #define bootsym_segrel(sym, off)           \
         $0,$bootsym(sym);                  \
 111:;                                      \
-        .pushsection .trampoline_seg, "a"; \
+        .pushsection .bootsym_seg, "a";    \
+        .long 111b - (off) - .;            \
+        .popsection
+
+#define bootdatasym(s) ((s)-trampoline_start)
+#define bootdatasym_rel(sym, off, opnd...) \
+        bootdatasym(sym),##opnd;           \
+111:;                                      \
+        .pushsection .bootdatasym_rel, "a";\
+        .long 111b - (off) - .;            \
+        .popsection
+
+#undef trampsym
+#define trampsym(s) ((s)-trampoline_start)
+
+#define trampsym_rel(sym, off, opnd...)    \
+        trampsym(sym),##opnd;              \
+111:;                                      \
+        .pushsection .trampsym_rel, "a";   \
+        .long 111b - (off) - .;            \
+        .popsection
+
+#undef tramp32sym
+#define tramp32sym(s) ((s)-trampoline_start)
+
+#define tramp32sym_rel(sym, off, opnd...)  \
+        tramp32sym(sym),##opnd;            \
+111:;                                      \
+        .pushsection .tramp32sym_rel, "a"; \
         .long 111b - (off) - .;            \
         .popsection
 
@@ -48,16 +89,19 @@
 GLOBAL(trampoline_realmode_entry)
         mov     %cs,%ax
         mov     %ax,%ds
-        movb    $0xA5,bootsym(trampoline_cpu_started)
+        movb    $0xA5,trampsym(trampoline_cpu_started)
         cld
         cli
-        lidt    bootsym(idt_48)
-        lgdt    bootsym(gdt_48)
+        lidt    trampsym(idt_48)
+        lgdt    trampsym(gdt_48)
         mov     $1,%bl                    # EBX != 0 indicates we are an AP
         xor     %ax, %ax
         inc     %ax
         lmsw    %ax                       # CR0.PE = 1 (enter protected mode)
-        ljmpl   $BOOT_CS32,$bootsym_rel(trampoline_protmode_entry,6)
+        ljmpl   $BOOT_CS32,$tramp32sym_rel(trampoline_protmode_entry,6)
+
+GLOBAL(trampoline_cpu_started)
+        .byte   0
 
 trampoline_gdt:
         /* 0x0000: unused */
@@ -79,8 +123,12 @@ trampoline_gdt:
          * address is computed at runtime.
          */
         .quad   0x00c0920000000fff
-
-        .pushsection .trampoline_rel, "a"
+        /*
+         * BOOT_PSEUDORM_CS and BOOT_PSEUDORM_DS are usable only at boot time,
+         * and their base addresses must reference the low location to which
+         * the boot-time code was loaded. Hence bootsym.
+         */
+        .pushsection .bootsym_rel, "a"
         .long   trampoline_gdt + BOOT_PSEUDORM_CS + 2 - .
         .long   trampoline_gdt + BOOT_PSEUDORM_DS + 2 - .
         .popsection
@@ -94,9 +142,6 @@ GLOBAL(cpuid_ext_features)
 GLOBAL(trampoline_xen_phys_start)
         .long   0
 
-GLOBAL(trampoline_cpu_started)
-        .byte   0
-
         .code32
 trampoline_protmode_entry:
         /* Set up a few descriptors: on entry only CS is guaranteed good. */
@@ -113,12 +158,12 @@ trampoline_protmode_entry:
 
         /* Load pagetable base register. */
         mov     $sym_offs(idle_pg_table),%eax
-        add     bootsym_rel(trampoline_xen_phys_start,4,%eax)
+        add     tramp32sym_rel(trampoline_xen_phys_start,4,%eax)
         mov     %eax,%cr3
 
         /* Adjust IA32_MISC_ENABLE if needed (for NX enabling below). */
-        mov     bootsym_rel(trampoline_misc_enable_off,4,%esi)
-        mov     bootsym_rel(trampoline_misc_enable_off+4,4,%edi)
+        mov     tramp32sym_rel(trampoline_misc_enable_off,4,%esi)
+        mov     tramp32sym_rel(trampoline_misc_enable_off+4,4,%edi)
         mov     %esi,%eax
         or      %edi,%eax
         jz      1f
@@ -132,7 +177,7 @@ trampoline_protmode_entry:
 1:
 
         /* Set up EFER (Extended Feature Enable Register). */
-        mov     bootsym_rel(cpuid_ext_features,4,%edi)
+        mov     tramp32sym_rel(cpuid_ext_features,4,%edi)
         movl    $MSR_EFER,%ecx
         rdmsr
         or      $EFER_LME|EFER_SCE,%eax   /* Long Mode + SYSCALL/SYSRET */
@@ -148,7 +193,7 @@ trampoline_protmode_entry:
 1:
 
         /* Now in compatibility mode. Long-jump into 64-bit mode. */
-        ljmp    $BOOT_CS64,$bootsym_rel(start64,6)
+        ljmp    $BOOT_CS64,$tramp32sym_rel(start64,6)
 
         .code64
 start64:
@@ -183,7 +228,7 @@ start64:
 idt_48: .word   0, 0, 0 # base = limit = 0
         .word   0
 gdt_48: .word   6*8-1
-        .long   bootsym_rel(trampoline_gdt,4)
+        .long   tramp32sym_rel(trampoline_gdt,4)
 
 /* The first page of trampoline is permanent, the rest boot-time only. */
 /* Reuse the boot trampoline on the 1st trampoline page as stack for wakeup. */
@@ -249,7 +294,7 @@ trampoline_boot_cpu_entry:
 
         mov     $0x0200,%ax
         int     $0x16
-        mov     %al,bootsym(kbd_shift_flags)
+        mov     %al,bootdatasym(kbd_shift_flags)
 
         /* Disable irqs before returning to protected mode. */
         cli
@@ -294,7 +339,7 @@ opt_edid:
         .byte   0
 
 #ifdef CONFIG_VIDEO
-GLOBAL(boot_vid_mode)
+boot_vid_mode:
         .word   VIDEO_80x25                     /* If we don't run at all, assume basic video mode 3 at 80x25. */
 vesa_size:
         .word   0,0,0                           /* width x depth x height */
diff --git a/xen/arch/x86/boot/video.S b/xen/arch/x86/boot/video.S
index 335a51c9b5..03907e9e9a 100644
--- a/xen/arch/x86/boot/video.S
+++ b/xen/arch/x86/boot/video.S
@@ -45,7 +45,7 @@
 #define PARAM_VESAPM_SEG        0x24
 #define PARAM_VESAPM_OFF        0x26
 #define PARAM_VESA_ATTRIB       0x28
-#define _param(param) bootsym(boot_vid_info)+(param)
+#define _param(param) bootdatasym(boot_vid_info)+(param)
 
 video:  xorw    %ax, %ax
         movw    %ax, %gs        # GS is zero
@@ -917,7 +917,7 @@ store_edid:
         cmpw    $0x004f, %ax            # Call failed?
         jne     .Lno_edid
 
-        movw    %bx, bootsym(boot_edid_caps)
+        movw    %bx, bootdatasym(boot_edid_caps)
 
         cmpb    $2, bootsym(opt_edid)   # EDID forced on cmdline (edid=force)?
         je      .Lforce_edid
@@ -933,7 +933,7 @@ store_edid:
         movw    $0x01, %bx
         movw    $0x00, %cx
         movw    $0x00, %dx
-        movw    $bootsym(boot_edid_info), %di
+        movw    $bootdatasym(boot_edid_info), %di
         int     $0x10
 
 .Lno_edid:
diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch/x86/boot/wakeup.S
index 090487ba78..35cf83b80e 100644
--- a/xen/arch/x86/boot/wakeup.S
+++ b/xen/arch/x86/boot/wakeup.S
@@ -53,7 +53,7 @@ ENTRY(wakeup_start)
 
         movw    $1, %ax
         lmsw    %ax             # Turn on CR0.PE 
-        ljmpl   $BOOT_CS32, $bootsym_rel(wakeup_32, 6)
+        ljmpl   $BOOT_CS32, $trampsym_rel(wakeup_32, 6)
 
 /* This code uses an extended set of video mode numbers. These include:
  * Aliases for standard modes
@@ -118,11 +118,11 @@ wakeup_32:
         mov     $BOOT_DS, %eax
         mov     %eax, %ds
         mov     %eax, %ss
-        mov     $bootsym_rel(wakeup_stack, 4, %esp)
+        mov     $trampsym_rel(wakeup_stack, 4, %esp)
 
         # check saved magic again
         mov     $sym_offs(saved_magic),%eax
-        add     bootsym_rel(trampoline_xen_phys_start, 4, %eax)
+        add     trampsym_rel(trampoline_xen_phys_start, 4, %eax)
         mov     (%eax), %eax
         cmp     $0x9abcdef0, %eax
         jne     bogus_saved_magic
@@ -135,12 +135,12 @@ wakeup_32:
 
         /* Load pagetable base register */
         mov     $sym_offs(idle_pg_table),%eax
-        add     bootsym_rel(trampoline_xen_phys_start,4,%eax)
+        add     trampsym_rel(trampoline_xen_phys_start,4,%eax)
         mov     %eax,%cr3
 
         /* Reapply IA32_MISC_ENABLE modifications from early_init_intel(). */
-        mov     bootsym_rel(trampoline_misc_enable_off, 4, %esi)
-        mov     bootsym_rel(trampoline_misc_enable_off + 4, 4, %edi)
+        mov     trampsym_rel(trampoline_misc_enable_off, 4, %esi)
+        mov     trampsym_rel(trampoline_misc_enable_off + 4, 4, %edi)
         mov     %esi, %eax
         or      %edi, %eax
         jz      1f
@@ -155,7 +155,7 @@ wakeup_32:
 
         /* Will cpuid feature change after resume? */
         /* Set up EFER (Extended Feature Enable Register). */
-        mov     bootsym_rel(cpuid_ext_features,4,%edi)
+        mov     trampsym_rel(cpuid_ext_features,4,%edi)
         test    $0x20100800,%edi /* SYSCALL/SYSRET, No Execute, Long Mode? */
         jz      .Lskip_eferw
         movl    $MSR_EFER,%ecx
@@ -177,7 +177,7 @@ wakeup_32:
 1:
 
         /* Now in compatibility mode. Long-jump to 64-bit mode */
-        ljmp    $BOOT_CS64, $bootsym_rel(wakeup_64,6)
+        ljmp    $BOOT_CS64, $trampsym_rel(wakeup_64,6)
 
         .code64
 wakeup_64:
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index a0737f98c3..8aefd7bfb2 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -99,7 +99,7 @@ static void __init efi_arch_relocate_image(unsigned long delta)
 }
 
 extern const s32 __trampoline_rel_start[], __trampoline_rel_stop[];
-extern const s32 __trampoline_seg_start[], __trampoline_seg_stop[];
+extern const s32 __trampoline32_rel_start[], __trampoline32_rel_stop[];
 
 static void __init relocate_trampoline(unsigned long phys)
 {
@@ -115,10 +115,10 @@ static void __init relocate_trampoline(unsigned long phys)
           trampoline_ptr < __trampoline_rel_stop;
           ++trampoline_ptr )
         *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
-    for ( trampoline_ptr = __trampoline_seg_start;
-          trampoline_ptr < __trampoline_seg_stop;
+    for ( trampoline_ptr = __trampoline32_rel_start;
+          trampoline_ptr < __trampoline32_rel_stop;
           ++trampoline_ptr )
-        *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
+        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
 }
 
 static void __init place_string(u32 *addr, const char *s)
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 87fa02b9b5..abf46347ec 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -236,11 +236,18 @@ SECTIONS
        *(.init.data.rel.*)
        . = ALIGN(4);
        __trampoline_rel_start = .;
-       *(.trampoline_rel)
+       *(.trampsym_rel)
        __trampoline_rel_stop = .;
-       __trampoline_seg_start = .;
-       *(.trampoline_seg)
-       __trampoline_seg_stop = .;
+       __trampoline32_rel_start = .;
+       *(.tramp32sym_rel)
+       __trampoline32_rel_stop = .;
+       __bootsym_rel_start = .;
+       *(.bootsym_rel)
+       *(.bootdatasym_rel)
+       __bootsym_rel_stop = .;
+       __bootsym_seg_start = .;
+       *(.bootsym_seg)
+       __bootsym_seg_stop = .;
        /*
         * struct alt_inst entries. From the header (alternative.h):
         * "Alternative instructions for different CPU types or capabilities"
-- 
2.21.0




* [Xen-devel] [PATCH v3 3/5] x86/boot: Rename trampoline_{start, end} to boot_trampoline_{start, end}
  2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations David Woodhouse
@ 2019-08-21 16:35   ` David Woodhouse
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image David Woodhouse
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 19+ messages in thread
From: David Woodhouse @ 2019-08-21 16:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: David Woodhouse <dwmw@amazon.co.uk>

In preparation for splitting the boot and permanent trampolines from
each other. Some of these will change back, but most of the uses are
boot-time, so do the plain search/replace that way first; a subsequent
patch will then extract the permanent trampoline code.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 xen/arch/x86/boot/head.S       | 12 ++++++------
 xen/arch/x86/boot/trampoline.S | 10 +++++-----
 xen/arch/x86/boot/video.S      |  4 ++--
 xen/arch/x86/efi/efi-boot.h    |  4 ++--
 xen/arch/x86/setup.c           |  4 ++--
 xen/arch/x86/tboot.c           |  6 +++---
 xen/arch/x86/x86_64/mm.c       |  2 +-
 xen/arch/x86/xen.lds.S         |  6 +++---
 xen/include/asm-x86/config.h   |  6 +++---
 9 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index e19b31fb85..4118f73683 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -746,20 +746,20 @@ trampoline_setup:
         cmpb    $0, sym_fs(skip_realmode)
         jz      1f
         /* If no-real-mode, jump straight to trampoline_protmode_entry */
-        lea     trampoline_protmode_entry-trampoline_start(%edi),%eax
+        lea     trampoline_protmode_entry-boot_trampoline_start(%edi),%eax
         /* EBX == 0 indicates we are the BP (Boot Processor). */
         xor     %ebx,%ebx
         jmp     2f
 1:
         /* Go via 16-bit code in trampoline_boot_cpu_entry */
-        lea     trampoline_boot_cpu_entry-trampoline_start(%edi),%eax
+        lea     trampoline_boot_cpu_entry-boot_trampoline_start(%edi),%eax
 2:
         pushl   $BOOT_CS32
         push    %eax
 
         /* Copy bootstrap trampoline to low memory, below 1MB. */
-        mov     $sym_offs(trampoline_start),%esi
-        mov     $((trampoline_end - trampoline_start) / 4),%ecx
+        mov     $sym_offs(boot_trampoline_start),%esi
+        mov     $((boot_trampoline_end - boot_trampoline_start) / 4),%ecx
         rep movsl %fs:(%esi),%es:(%edi)
 
         /* Jump into the relocated trampoline. */
@@ -771,8 +771,8 @@ cmdline_parse_early:
 reloc:
 #include "reloc.S"
 
-ENTRY(trampoline_start)
+ENTRY(boot_trampoline_start)
 #include "trampoline.S"
-ENTRY(trampoline_end)
+ENTRY(boot_trampoline_end)
 
 #include "x86_64.S"
diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index 8537aeb917..71d5424179 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -31,7 +31,7 @@
  *                to be used for AP startup.
  */
 #undef bootsym
-#define bootsym(s) ((s)-trampoline_start)
+#define bootsym(s) ((s)-boot_trampoline_start)
 
 #define bootsym_rel(sym, off, opnd...)     \
         bootsym(sym),##opnd;               \
@@ -47,7 +47,7 @@
         .long 111b - (off) - .;            \
         .popsection
 
-#define bootdatasym(s) ((s)-trampoline_start)
+#define bootdatasym(s) ((s)-boot_trampoline_start)
 #define bootdatasym_rel(sym, off, opnd...) \
         bootdatasym(sym),##opnd;           \
 111:;                                      \
@@ -56,7 +56,7 @@
         .popsection
 
 #undef trampsym
-#define trampsym(s) ((s)-trampoline_start)
+#define trampsym(s) ((s)-boot_trampoline_start)
 
 #define trampsym_rel(sym, off, opnd...)    \
         trampsym(sym),##opnd;              \
@@ -66,7 +66,7 @@
         .popsection
 
 #undef tramp32sym
-#define tramp32sym(s) ((s)-trampoline_start)
+#define tramp32sym(s) ((s)-boot_trampoline_start)
 
 #define tramp32sym_rel(sym, off, opnd...)  \
         tramp32sym(sym),##opnd;            \
@@ -232,7 +232,7 @@ gdt_48: .word   6*8-1
 
 /* The first page of trampoline is permanent, the rest boot-time only. */
 /* Reuse the boot trampoline on the 1st trampoline page as stack for wakeup. */
-        .equ    wakeup_stack, trampoline_start + PAGE_SIZE
+        .equ    wakeup_stack, boot_trampoline_start + PAGE_SIZE
         .global wakeup_stack
 
 /* From here on early boot only. */
diff --git a/xen/arch/x86/boot/video.S b/xen/arch/x86/boot/video.S
index 03907e9e9a..5087c6a4d5 100644
--- a/xen/arch/x86/boot/video.S
+++ b/xen/arch/x86/boot/video.S
@@ -15,8 +15,8 @@
 
 #include "video.h"
 
-/* Scratch space layout: trampoline_end to trampoline_end+0x1000. */
-#define modelist       bootsym(trampoline_end)   /* 2kB (256 entries) */
+/* Scratch space layout: boot_trampoline_end to boot_trampoline_end+0x1000. */
+#define modelist       bootsym(boot_trampoline_end)   /* 2kB (256 entries) */
 #define vesa_glob_info (modelist + 0x800)        /* 1kB */
 #define vesa_mode_info (vesa_glob_info + 0x400)  /* 1kB */
 
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 8aefd7bfb2..fac2b5e12a 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -232,7 +232,7 @@ static void __init noreturn efi_arch_post_exit_boot(void)
     u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;
 
     efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
-    memcpy((void *)trampoline_phys, trampoline_start, cfg.size);
+    memcpy((void *)trampoline_phys, boot_trampoline_start, cfg.size);
 
     /* Set system registers and transfer control. */
     asm volatile("pushq $0\n\tpopfq");
@@ -566,7 +566,7 @@ static void __init efi_arch_memory_setup(void)
     cfg.addr = 0x100000;
 
     if ( efi_enabled(EFI_LOADER) )
-        cfg.size = trampoline_end - trampoline_start;
+        cfg.size = boot_trampoline_end - boot_trampoline_start;
     else
         cfg.size = TRAMPOLINE_SPACE + TRAMPOLINE_STACK_SPACE;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 87fc7c90da..910e5302d3 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1876,8 +1876,8 @@ int __hwdom_init xen_in_range(unsigned long mfn)
     if ( !xen_regions[0].s )
     {
         /* S3 resume code (and other real mode trampoline code) */
-        xen_regions[region_s3].s = bootsym_phys(trampoline_start);
-        xen_regions[region_s3].e = bootsym_phys(trampoline_end);
+        xen_regions[region_s3].s = bootsym_phys(boot_trampoline_start);
+        xen_regions[region_s3].e = bootsym_phys(boot_trampoline_end);
 
         /*
          * This needs to remain in sync with the uses of the same symbols in
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 30d159cc62..2769e1ccd9 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -369,9 +369,9 @@ void tboot_shutdown(uint32_t shutdown_type)
          */
         g_tboot_shared->num_mac_regions = 3;
         /* S3 resume code (and other real mode trampoline code) */
-        g_tboot_shared->mac_regions[0].start = bootsym_phys(trampoline_start);
-        g_tboot_shared->mac_regions[0].size = bootsym_phys(trampoline_end) -
-                                              bootsym_phys(trampoline_start);
+        g_tboot_shared->mac_regions[0].start = bootsym_phys(boot_trampoline_start);
+        g_tboot_shared->mac_regions[0].size = bootsym_phys(boot_trampoline_end) -
+                                              bootsym_phys(boot_trampoline_start);
         /* hypervisor .text + .rodata */
         g_tboot_shared->mac_regions[1].start = (uint64_t)__pa(&_stext);
         g_tboot_shared->mac_regions[1].size = __pa(&__2M_rodata_end) -
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 1919cae18b..149cc4f7b5 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -697,7 +697,7 @@ void __init zap_low_mappings(void)
 
     /* Replace with mapping of the boot trampoline only. */
     map_pages_to_xen(trampoline_phys, maddr_to_mfn(trampoline_phys),
-                     PFN_UP(trampoline_end - trampoline_start),
+                     PFN_UP(boot_trampoline_end - boot_trampoline_start),
                      __PAGE_HYPERVISOR);
 }
 
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index abf46347ec..310cc525ac 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -378,12 +378,12 @@ ASSERT(IS_ALIGNED(cpu0_stack, STACK_SIZE), "cpu0_stack misaligned")
 ASSERT(IS_ALIGNED(__init_begin, PAGE_SIZE), "__init_begin misaligned")
 ASSERT(IS_ALIGNED(__init_end,   PAGE_SIZE), "__init_end misaligned")
 
-ASSERT(IS_ALIGNED(trampoline_start, 4), "trampoline_start misaligned")
-ASSERT(IS_ALIGNED(trampoline_end,   4), "trampoline_end misaligned")
+ASSERT(IS_ALIGNED(boot_trampoline_start, 4), "boot_trampoline_start misaligned")
+ASSERT(IS_ALIGNED(boot_trampoline_end,   4), "boot_trampoline_end misaligned")
 ASSERT(IS_ALIGNED(__bss_start,      8), "__bss_start misaligned")
 ASSERT(IS_ALIGNED(__bss_end,        8), "__bss_end misaligned")
 
-ASSERT((trampoline_end - trampoline_start) < TRAMPOLINE_SPACE - MBI_SPACE_MIN,
+ASSERT((boot_trampoline_end - boot_trampoline_start) < TRAMPOLINE_SPACE - MBI_SPACE_MIN,
     "not enough room for trampoline and mbi data")
 ASSERT((wakeup_stack - wakeup_stack_start) >= WAKEUP_STACK_MIN,
     "wakeup stack too small")
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 22dc795eea..50481748f6 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -90,11 +90,11 @@
 #ifndef __ASSEMBLY__
 extern unsigned long trampoline_phys;
 #define bootsym_phys(sym)                                 \
-    (((unsigned long)&(sym)-(unsigned long)&trampoline_start)+trampoline_phys)
+    (((unsigned long)&(sym)-(unsigned long)&boot_trampoline_start)+trampoline_phys)
 #define bootsym(sym)                                      \
     (*RELOC_HIDE((typeof(&(sym)))__va(__pa(&(sym))),      \
-                 trampoline_phys-__pa(trampoline_start)))
-extern char trampoline_start[], trampoline_end[];
+                 trampoline_phys-__pa(boot_trampoline_start)))
+extern char boot_trampoline_start[], boot_trampoline_end[];
 extern char trampoline_realmode_entry[];
 extern unsigned int trampoline_xen_phys_start;
 extern unsigned char trampoline_cpu_started;
-- 
2.21.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations David Woodhouse
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 3/5] x86/boot: Rename trampoline_{start, end} to boot_trampoline_{start, end} David Woodhouse
@ 2019-08-21 16:35   ` David Woodhouse
  2019-08-30 15:43     ` Jan Beulich
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 5/5] x86/boot: Do not use trampoline for no-real-mode boot paths David Woodhouse
  2019-08-30 14:25   ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot Jan Beulich
  4 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-08-21 16:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: David Woodhouse <dwmw@amazon.co.uk>

Ditch the bootsym() access from C code for the variables populated by
the 16-bit boot code. As well as being cleaner, this also paves the way
for not having the 16-bit boot code in low memory at all for
no-real-mode or EFI loader boots.

These variables are put into a separate .data.boot16 section and
accessed in low memory during the real-mode boot, then copied back to
their native location in the Xen image when real mode has finished.

Fix the limit in gdt_48 to admit that trampoline_gdt actually contains
seven entries, since the seventh (BOOT_FS) is now used in late code, so
it matters. Andrew has a patch to further tidy up the GDT and initialise
accessed bits etc., so I won't go overboard with more than this trivial
size fix for now.

The bootsym() macro remains in C code purely for the variables which
are written for the later AP startup and wakeup trampoline to use.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 xen/arch/x86/boot/edd.S           |  2 ++
 xen/arch/x86/boot/head.S          | 16 +++++++++++++++
 xen/arch/x86/boot/mem.S           |  2 ++
 xen/arch/x86/boot/trampoline.S    | 33 ++++++++++++++++++++++++++++---
 xen/arch/x86/boot/video.S         | 30 +++++++++++++++-------------
 xen/arch/x86/platform_hypercall.c | 18 ++++++++---------
 xen/arch/x86/setup.c              | 22 ++++++++++-----------
 xen/arch/x86/xen.lds.S            |  9 ++++++++-
 xen/include/asm-x86/edd.h         |  1 -
 9 files changed, 94 insertions(+), 39 deletions(-)

diff --git a/xen/arch/x86/boot/edd.S b/xen/arch/x86/boot/edd.S
index 434bbbd960..138d04c964 100644
--- a/xen/arch/x86/boot/edd.S
+++ b/xen/arch/x86/boot/edd.S
@@ -163,6 +163,7 @@ edd_done:
 .Ledd_mbr_sig_skip:
         ret
 
+        .pushsection .data.boot16, "aw", @progbits
 GLOBAL(boot_edd_info_nr)
         .byte   0
 GLOBAL(boot_mbr_signature_nr)
@@ -171,3 +172,4 @@ GLOBAL(boot_mbr_signature)
         .fill   EDD_MBR_SIG_MAX*8,1,0
 GLOBAL(boot_edd_info)
         .fill   EDD_INFO_MAX * (EDDEXTSIZE + EDDPARMSIZE), 1, 0
+        .popsection
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 4118f73683..6d315020d2 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -725,6 +725,17 @@ trampoline_setup:
         cmp     $sym_offs(__bootsym_seg_stop),%edi
         jb      1b
 
+        /* Relocations for the boot data section. */
+        mov     sym_fs(trampoline_phys),%edx
+        add     $(boot_trampoline_end - boot_trampoline_start),%edx
+        mov     $sym_offs(__bootdatasym_rel_start),%edi
+1:
+        mov     %fs:(%edi),%eax
+        add     %edx,%fs:(%edi,%eax)
+        add     $4,%edi
+        cmp     $sym_offs(__bootdatasym_rel_stop),%edi
+        jb      1b
+
         /* Do not parse command line on EFI platform here. */
         cmpb    $0,sym_fs(efi_platform)
         jnz     1f
@@ -762,6 +773,11 @@ trampoline_setup:
         mov     $((boot_trampoline_end - boot_trampoline_start) / 4),%ecx
         rep movsl %fs:(%esi),%es:(%edi)
 
+        /* Copy boot data template to low memory. */
+        mov     $sym_offs(bootdata_start),%esi
+        mov     $((bootdata_end - bootdata_start) / 4),%ecx
+        rep movsl %fs:(%esi),%es:(%edi)
+
         /* Jump into the relocated trampoline. */
         lret
 
diff --git a/xen/arch/x86/boot/mem.S b/xen/arch/x86/boot/mem.S
index c5bc774325..9eec1c6ae2 100644
--- a/xen/arch/x86/boot/mem.S
+++ b/xen/arch/x86/boot/mem.S
@@ -67,6 +67,7 @@ get_memory_map:
         ret
 
         .align  4
+        .pushsection .data.boot16, "aw", @progbits
 GLOBAL(bios_e820map)
         .fill   E820_BIOS_MAX*20,1,0
 GLOBAL(bios_e820nr)
@@ -75,3 +76,4 @@ GLOBAL(lowmem_kb)
         .long   0
 GLOBAL(highmem_kb)
         .long   0
+        .popsection
diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index 71d5424179..c747d70404 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -47,11 +47,15 @@
         .long 111b - (off) - .;            \
         .popsection
 
-#define bootdatasym(s) ((s)-boot_trampoline_start)
+        .pushsection .data.boot16, "aw", @progbits
+GLOBAL(bootdata_start)
+        .popsection
+
+#define bootdatasym(s) ((s)-bootdata_start+(boot_trampoline_end-boot_trampoline_start))
 #define bootdatasym_rel(sym, off, opnd...) \
         bootdatasym(sym),##opnd;           \
 111:;                                      \
-        .pushsection .bootdatasym_rel, "a";\
+        .pushsection .bootsym_rel, "a";\
         .long 111b - (off) - .;            \
         .popsection
 
@@ -227,7 +231,7 @@ start64:
         .word   0
 idt_48: .word   0, 0, 0 # base = limit = 0
         .word   0
-gdt_48: .word   6*8-1
+gdt_48: .word   7*8-1
         .long   tramp32sym_rel(trampoline_gdt,4)
 
 /* The first page of trampoline is permanent, the rest boot-time only. */
@@ -318,6 +322,23 @@ trampoline_boot_cpu_entry:
         mov     %eax,%gs
         mov     %eax,%ss
 
+        /*
+         * Copy locally-gathered data back up into the Xen physical image
+         */
+        mov     $BOOT_FS,%eax
+        mov     %eax,%es
+
+        mov     $sym_offs(bootdata_end),%ecx
+        mov     $sym_offs(bootdata_start),%edi
+        sub     %edi,%ecx
+        mov     $bootdatasym_rel(bootdata_start,4,%esi)
+        rep movsb %ds:(%esi),%es:(%edi)
+
+        /*
+         * %es still points to BOOT_FS but trampoline_protmode_entry
+         * reloads it anyway.
+         */
+
         /* EBX == 0 indicates we are the BP (Boot Processor). */
         xor     %ebx,%ebx
 
@@ -345,8 +366,10 @@ vesa_size:
         .word   0,0,0                           /* width x depth x height */
 #endif
 
+        .pushsection .data.boot16, "aw", @progbits
 GLOBAL(kbd_shift_flags)
         .byte   0
+        .popsection
 
 rm_idt: .word   256*4-1, 0, 0
 
@@ -355,3 +378,7 @@ rm_idt: .word   256*4-1, 0, 0
 #ifdef CONFIG_VIDEO
 #include "video.S"
 #endif
+
+        .pushsection .data.boot16, "aw", @progbits
+GLOBAL(bootdata_end)
+        .popsection
diff --git a/xen/arch/x86/boot/video.S b/xen/arch/x86/boot/video.S
index 5087c6a4d5..4608464b77 100644
--- a/xen/arch/x86/boot/video.S
+++ b/xen/arch/x86/boot/video.S
@@ -15,10 +15,10 @@
 
 #include "video.h"
 
-/* Scratch space layout: boot_trampoline_end to boot_trampoline_end+0x1000. */
-#define modelist       bootsym(boot_trampoline_end)   /* 2kB (256 entries) */
-#define vesa_glob_info (modelist + 0x800)        /* 1kB */
-#define vesa_mode_info (vesa_glob_info + 0x400)  /* 1kB */
+/* Scratch space layout: bootdata_end to bootdata_end+0x1000. */
+#define modelist(t)       bootdatasym_rel(bootdata_end,2,t)         /* 2KiB (256 entries) */
+#define vesa_glob_info(t) bootdatasym_rel((bootdata_end+0x800),2,t) /* 1KiB */
+#define vesa_mode_info(t) bootdatasym_rel((bootdata_end+0xc00),2,t) /* 1KiB */
 
 /* Retrieve Extended Display Identification Data. */
 #define CONFIG_FIRMWARE_EDID
@@ -113,7 +113,7 @@ mopar2: movb    %al, _param(PARAM_VIDEO_LINES)
 
 # Fetching of VESA frame buffer parameters
 mopar_gr:
-        leaw    vesa_mode_info, %di
+        leaw    vesa_mode_info(%di)
         movb    $0x23, _param(PARAM_HAVE_VGA)
         movw    16(%di), %ax
         movw    %ax, _param(PARAM_LFB_LINELENGTH)
@@ -134,7 +134,7 @@ mopar_gr:
         movw    %ax, _param(PARAM_VESA_ATTRIB)
 
 # get video mem size
-        leaw    vesa_glob_info, %di
+        leaw    vesa_glob_info(%di)
         xorl    %eax, %eax
         movw    18(%di), %ax
         movl    %eax, _param(PARAM_LFB_SIZE)
@@ -226,7 +226,7 @@ an1:    call    prtstr
         leaw    bootsym(listhdr), %si   # Table header
         call    prtstr
         movb    $0x30, %dl              # DL holds mode number
-        leaw    modelist, %si
+        leaw    modelist(%si)
 lm1:    cmpw    $ASK_VGA, (%si)         # End?
         jz      lm2
 
@@ -435,13 +435,13 @@ setmenu:
         jmp     mode_set
 
 check_vesa:
-        leaw    vesa_glob_info, %di
+        leaw    vesa_glob_info(%di)
         movw    $0x4f00, %ax
         int     $0x10
         cmpw    $0x004f, %ax
         jnz     setbad
 
-        leaw    vesa_mode_info, %di
+        leaw    vesa_mode_info(%di)
         subb    $VIDEO_FIRST_VESA>>8, %bh
         movw    %bx, %cx                # Get mode information structure
         movw    $0x4f01, %ax
@@ -509,7 +509,7 @@ inidx:  outb    %al, %dx                # Read from indexed VGA register
 
 setvesabysize:
         call    mode_table
-        leaw    modelist,%si
+        leaw    modelist(%si)
 1:      add     $8,%si
         cmpw    $ASK_VGA,-8(%si)        # End?
         je      _setbad
@@ -669,7 +669,7 @@ mode_table:
         orw     %di, %di
         jnz     mtab1
 
-        leaw    modelist, %di           # Store standard modes:
+        leaw    modelist(%di)           # Store standard modes:
         movw    $VIDEO_80x25,(%di)      # The 80x25 mode (ALL)
         movw    $0x50,2(%di)
         movw    $0x19,4(%di)
@@ -684,7 +684,7 @@ mode_table:
 
         movw    $ASK_VGA, (%di)         # End marker
         movw    %di, bootsym(mt_end)
-mtab1:  leaw    modelist, %si           # SI=mode list, DI=list end
+mtab1:  leaw    modelist(%si)           # SI=mode list, DI=list end
 ret0:   ret
 
 # Modes usable on all standard VGAs
@@ -700,7 +700,7 @@ vga_modes_end:
 # Detect VESA modes.
 vesa_modes:
         movw    %di, %bp                # BP=original mode table end
-        leaw    vesa_glob_info, %di
+        leaw    vesa_glob_info(%di)
         movw    $0x4f00, %ax            # VESA Get card info call
         int     $0x10
         movw    %di, %si
@@ -897,7 +897,7 @@ store_edid:
         cmpb    $1, bootsym(opt_edid)   # EDID disabled on cmdline (edid=no)?
         je      .Lno_edid
 
-        leaw    vesa_glob_info, %di
+        leaw    vesa_glob_info(%di)
         movw    $0x4f00, %ax
         int     $0x10
         cmpw    $0x004f, %ax
@@ -990,6 +990,7 @@ name_bann:      .asciz  "Video adapter: "
 
 force_size:     .word   0       # Use this size instead of the one in BIOS vars
 
+        .pushsection .data.boot16, "aw", @progbits
 GLOBAL(boot_vid_info)
         .byte   0, 0    /* orig_x, orig_y */
         .byte   3       /* text mode 3    */
@@ -1001,3 +1002,4 @@ GLOBAL(boot_edid_info)
         .fill   128,1,0x13
 GLOBAL(boot_edid_caps)
         .word   0x1313
+        .popsection
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index b19f6ec4ed..9a56bd8f84 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -318,10 +318,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             u16 length;
 
             ret = -ESRCH;
-            if ( op->u.firmware_info.index >= bootsym(boot_edd_info_nr) )
+            if ( op->u.firmware_info.index >= boot_edd_info_nr )
                 break;
 
-            info = bootsym(boot_edd_info) + op->u.firmware_info.index;
+            info = boot_edd_info + op->u.firmware_info.index;
 
             /* Transfer the EDD info block. */
             ret = -EFAULT;
@@ -357,10 +357,10 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             const struct mbr_signature *sig;
 
             ret = -ESRCH;
-            if ( op->u.firmware_info.index >= bootsym(boot_mbr_signature_nr) )
+            if ( op->u.firmware_info.index >= boot_mbr_signature_nr )
                 break;
 
-            sig = bootsym(boot_mbr_signature) + op->u.firmware_info.index;
+            sig = boot_mbr_signature + op->u.firmware_info.index;
 
             op->u.firmware_info.u.disk_mbr_signature.device = sig->device;
             op->u.firmware_info.u.disk_mbr_signature.mbr_signature =
@@ -376,13 +376,13 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
 #ifdef CONFIG_VIDEO
             if ( op->u.firmware_info.index != 0 )
                 break;
-            if ( *(u32 *)bootsym(boot_edid_info) == 0x13131313 )
+            if ( *(u32 *)boot_edid_info == 0x13131313 )
                 break;
 
             op->u.firmware_info.u.vbeddc_info.capabilities =
-                bootsym(boot_edid_caps);
+                boot_edid_caps;
             op->u.firmware_info.u.vbeddc_info.edid_transfer_time =
-                bootsym(boot_edid_caps) >> 8;
+                boot_edid_caps >> 8;
 
             ret = 0;
             if ( __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
@@ -390,7 +390,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
                  __copy_field_to_guest(u_xenpf_op, op, u.firmware_info.
                                        u.vbeddc_info.edid_transfer_time) ||
                  copy_to_compat(op->u.firmware_info.u.vbeddc_info.edid,
-                                bootsym(boot_edid_info), 128) )
+                                boot_edid_info, 128) )
                 ret = -EFAULT;
 #endif
             break;
@@ -407,7 +407,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
             if ( op->u.firmware_info.index != 0 )
                 break;
 
-            op->u.firmware_info.u.kbd_shift_flags = bootsym(kbd_shift_flags);
+            op->u.firmware_info.u.kbd_shift_flags = kbd_shift_flags;
 
             ret = 0;
             if ( __copy_field_to_guest(u_xenpf_op, op,
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 910e5302d3..5c818caed2 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -506,7 +506,7 @@ extern struct boot_video_info boot_vid_info;
 static void __init parse_video_info(void)
 {
 #ifdef CONFIG_VIDEO
-    struct boot_video_info *bvi = &bootsym(boot_vid_info);
+    struct boot_video_info *bvi = &boot_vid_info;
 
     /* vga_console_info is filled directly on EFI platform. */
     if ( efi_enabled(EFI_BOOT) )
@@ -671,10 +671,10 @@ static char * __init cmdline_cook(char *p, const char *loader_name)
 
 static unsigned int __init copy_bios_e820(struct e820entry *map, unsigned int limit)
 {
-    unsigned int n = min(bootsym(bios_e820nr), limit);
+    unsigned int n = min(bios_e820nr, limit);
 
     if ( n )
-        memcpy(map, bootsym(bios_e820map), sizeof(*map) * n);
+        memcpy(map, bios_e820map, sizeof(*map) * n);
 
     return n;
 }
@@ -815,15 +815,15 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     }
 
     /* Print VBE/DDC EDID information. */
-    if ( bootsym(boot_edid_caps) != 0x1313 )
+    if ( boot_edid_caps != 0x1313 )
     {
-        u16 caps = bootsym(boot_edid_caps);
+        u16 caps = boot_edid_caps;
         printk(" VBE/DDC methods:%s%s%s; ",
                (caps & 1) ? " V1" : "",
                (caps & 2) ? " V2" : "",
                !(caps & 3) ? " none" : "");
         printk("EDID transfer time: %d seconds\n", caps >> 8);
-        if ( *(u32 *)bootsym(boot_edid_info) == 0x13131313 )
+        if ( *(u32 *)boot_edid_info == 0x13131313 )
         {
             printk(" EDID info not retrieved because ");
             if ( !(caps & 3) )
@@ -838,9 +838,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     printk("Disc information:\n");
     printk(" Found %d MBR signatures\n",
-           bootsym(boot_mbr_signature_nr));
+           boot_mbr_signature_nr);
     printk(" Found %d EDD information structures\n",
-           bootsym(boot_edd_info_nr));
+           boot_edd_info_nr);
 
     /* Check that we have at least one Multiboot module. */
     if ( !(mbi->flags & MBI_MODULES) || (mbi->mods_count == 0) )
@@ -923,14 +923,14 @@ void __init noreturn __start_xen(unsigned long mbi_p)
             bytes += map->size + 4;
         }
     }
-    else if ( bootsym(lowmem_kb) )
+    else if ( lowmem_kb )
     {
         memmap_type = "Xen-e801";
         e820_raw.map[0].addr = 0;
-        e820_raw.map[0].size = bootsym(lowmem_kb) << 10;
+        e820_raw.map[0].size = lowmem_kb << 10;
         e820_raw.map[0].type = E820_RAM;
         e820_raw.map[1].addr = 0x100000;
-        e820_raw.map[1].size = bootsym(highmem_kb) << 10;
+        e820_raw.map[1].size = highmem_kb << 10;
         e820_raw.map[1].type = E820_RAM;
         e820_raw.nr_map = 2;
     }
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 310cc525ac..e26bcb7f79 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -243,11 +243,13 @@ SECTIONS
        __trampoline32_rel_stop = .;
        __bootsym_rel_start = .;
        *(.bootsym_rel)
-       *(.bootdatasym_rel)
        __bootsym_rel_stop = .;
        __bootsym_seg_start = .;
        *(.bootsym_seg)
        __bootsym_seg_stop = .;
+       __bootdatasym_rel_start = .;
+       *(.bootdatasym_rel)
+       __bootdatasym_rel_stop = .;
        /*
         * struct alt_inst entries. From the header (alternative.h):
         * "Alternative instructions for different CPU types or capabilities"
@@ -290,6 +292,11 @@ SECTIONS
   DECL_SECTION(.data) {
        *(.data.page_aligned)
        *(.data)
+       . = ALIGN(4);
+       __bootdata_start = .;
+       *(.data.boot16)
+       . = ALIGN(4);
+       __bootdata_end = .;
        *(.data.rel)
        *(.data.rel.*)
        CONSTRUCTORS
diff --git a/xen/include/asm-x86/edd.h b/xen/include/asm-x86/edd.h
index afaa23732a..a4d6b4d90e 100644
--- a/xen/include/asm-x86/edd.h
+++ b/xen/include/asm-x86/edd.h
@@ -143,7 +143,6 @@ struct __packed mbr_signature {
     u32 signature;
 };
 
-/* These all reside in the boot trampoline. Access via bootsym(). */
 extern struct mbr_signature boot_mbr_signature[];
 extern u8 boot_mbr_signature_nr;
 extern struct edd_info boot_edd_info[];
-- 
2.21.0



* [Xen-devel] [PATCH v3 5/5] x86/boot: Do not use trampoline for no-real-mode boot paths
  2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
                     ` (2 preceding siblings ...)
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image David Woodhouse
@ 2019-08-21 16:35   ` David Woodhouse
  2019-09-02  8:54     ` Jan Beulich
  2019-08-30 14:25   ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot Jan Beulich
  4 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-08-21 16:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: David Woodhouse <dwmw@amazon.co.uk>

When booted from EFI or with no-real-mode, there is no need to stomp
on low memory with the 16-bit boot code. Instead, just go straight to
trampoline_protmode_entry() at its physical location within the Xen
image, having applied suitable relocations.

This means that the GDT has to be loaded with lgdtl, because the 16-bit
lgdt instruction would drop the high 8 bits of the trampoline_gdt
address, causing failures if the Xen image were loaded above 16MiB.

For now, the boot code (including the EFI loader path) still determines
what the trampoline_phys address should be for the permanent 16-bit
trampoline (used for AP startup, and wakeup). But that trampoline is
now only relocated for that address and copied into low memory later,
from a relocate_trampoline() call made from __start_xen().

The permanent trampoline can't trivially use the 32-bit code in place in
its physical location in the Xen image, as idle_pg_table doesn't have a
full physical mapping. Fixing that is theoretically possible but it's
actually much simpler just to relocate the 32-bit code (again) into low
memory, alongside the 16-bit code for the permanent trampoline.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 xen/arch/x86/acpi/power.c      |  6 +--
 xen/arch/x86/boot/head.S       | 67 ++++++++++++++++------------------
 xen/arch/x86/boot/trampoline.S | 32 ++++++++++++----
 xen/arch/x86/cpu/common.c      |  2 +-
 xen/arch/x86/cpu/intel.c       |  2 +-
 xen/arch/x86/efi/efi-boot.h    | 31 ++--------------
 xen/arch/x86/setup.c           | 46 +++++++++++++++++++++--
 xen/arch/x86/smpboot.c         |  6 +--
 xen/arch/x86/tboot.c           |  6 +--
 xen/arch/x86/x86_64/mm.c       |  2 +-
 xen/include/asm-x86/acpi.h     |  2 +-
 xen/include/asm-x86/config.h   | 12 +++---
 12 files changed, 122 insertions(+), 92 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index aecc754fdb..d2a355429e 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -152,9 +152,9 @@ static void acpi_sleep_prepare(u32 state)
         return;
 
     if ( acpi_sinfo.vector_width == 32 )
-        *(uint32_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
+        *(uint32_t *)wakeup_vector_va = trampsym_phys(wakeup_start);
     else
-        *(uint64_t *)wakeup_vector_va = bootsym_phys(wakeup_start);
+        *(uint64_t *)wakeup_vector_va = trampsym_phys(wakeup_start);
 }
 
 static void acpi_sleep_post(u32 state) {}
@@ -388,7 +388,7 @@ static void tboot_sleep(u8 sleep_state)
     g_tboot_shared->acpi_sinfo.wakeup_vector = acpi_sinfo.wakeup_vector;
     g_tboot_shared->acpi_sinfo.vector_width = acpi_sinfo.vector_width;
     g_tboot_shared->acpi_sinfo.kernel_s3_resume_vector =
-                                              bootsym_phys(wakeup_start);
+                                              trampsym_phys(wakeup_start);
 
     switch ( sleep_state )
     {
diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 6d315020d2..cebad00c13 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -689,16 +689,23 @@ trampoline_setup:
         lea     __PAGE_HYPERVISOR+sym_esi(l1_identmap),%edi
         mov     %edi,sym_fs(l2_bootmap)
 
-        /* Apply relocations to bootstrap trampoline. */
-        mov     sym_fs(trampoline_phys),%edx
-        mov     $sym_offs(__trampoline_rel_start),%edi
-1:
-        mov     %fs:(%edi),%eax
-        add     %edx,%fs:(%edi,%eax)
-        add     $4,%edi
-        cmp     $sym_offs(__trampoline_rel_stop),%edi
-        jb      1b
+        /* Do not parse command line on EFI platform here. */
+        cmpb    $0,sym_fs(efi_platform)
+        jnz     1f
 
+        /* Bail if there is no command line to parse. */
+        mov     sym_fs(multiboot_ptr),%ebx
+        testl   $MBI_CMDLINE,MB_flags(%ebx)
+        jz      1f
+
+        lea     sym_esi(early_boot_opts),%eax
+        push    %eax
+        pushl   MB_cmdline(%ebx)
+        call    cmdline_parse_early
+
+1:
+        /* Apply relocations to 32-bit trampoline for execution in place. */
+        lea     sym_esi(perm_trampoline_start),%edx
         mov     $sym_offs(__trampoline32_rel_start),%edi
 1:
         mov     %fs:(%edi),%eax
@@ -707,6 +714,21 @@ trampoline_setup:
         cmp     $sym_offs(__trampoline32_rel_stop),%edi
         jb      1b
 
+        cmp     $0,sym_esi(skip_realmode)
+        jz      .Ldo_realmode
+
+        /* Go directly to trampoline_protmode_entry at its physical address */
+        lea     trampoline_protmode_entry-__XEN_VIRT_START(%esi),%eax
+        pushl   $BOOT_CS32
+        push    %eax
+
+        /* EBX == 0 indicates we are the BP (Boot Processor). */
+        xor     %ebx,%ebx
+        retl
+
+.Ldo_realmode:
+        /* Apply relocations to 16-bit boot code. */
+        mov     sym_fs(trampoline_phys),%edx
         mov     $sym_offs(__bootsym_rel_start),%edi
 1:
         mov     %fs:(%edi),%eax
@@ -736,35 +758,12 @@ trampoline_setup:
         cmp     $sym_offs(__bootdatasym_rel_stop),%edi
         jb      1b
 
-        /* Do not parse command line on EFI platform here. */
-        cmpb    $0,sym_fs(efi_platform)
-        jnz     1f
-
-        /* Bail if there is no command line to parse. */
-        mov     sym_fs(multiboot_ptr),%ebx
-        testl   $MBI_CMDLINE,MB_flags(%ebx)
-        jz      1f
-
-        lea     sym_esi(early_boot_opts),%eax
-        push    %eax
-        pushl   MB_cmdline(%ebx)
-        call    cmdline_parse_early
-
-1:
         /* Switch to low-memory stack which lives at the end of trampoline region. */
         mov     sym_fs(trampoline_phys),%edi
         lea     TRAMPOLINE_SPACE+TRAMPOLINE_STACK_SPACE(%edi),%esp
-        cmpb    $0, sym_fs(skip_realmode)
-        jz      1f
-        /* If no-real-mode, jump straight to trampoline_protmode_entry */
-        lea     trampoline_protmode_entry-boot_trampoline_start(%edi),%eax
-        /* EBX == 0 indicates we are the BP (Boot Processor). */
-        xor     %ebx,%ebx
-        jmp     2f
-1:
+
         /* Go via 16-bit code in trampoline_boot_cpu_entry */
         lea     trampoline_boot_cpu_entry-boot_trampoline_start(%edi),%eax
-2:
         pushl   $BOOT_CS32
         push    %eax
 
@@ -787,8 +786,6 @@ cmdline_parse_early:
 reloc:
 #include "reloc.S"
 
-ENTRY(boot_trampoline_start)
 #include "trampoline.S"
-ENTRY(boot_trampoline_end)
 
 #include "x86_64.S"
diff --git a/xen/arch/x86/boot/trampoline.S b/xen/arch/x86/boot/trampoline.S
index c747d70404..b846977ad0 100644
--- a/xen/arch/x86/boot/trampoline.S
+++ b/xen/arch/x86/boot/trampoline.S
@@ -60,7 +60,7 @@ GLOBAL(bootdata_start)
         .popsection
 
 #undef trampsym
-#define trampsym(s) ((s)-boot_trampoline_start)
+#define trampsym(s) ((s)-perm_trampoline_start)
 
 #define trampsym_rel(sym, off, opnd...)    \
         trampsym(sym),##opnd;              \
@@ -70,7 +70,7 @@ GLOBAL(bootdata_start)
         .popsection
 
 #undef tramp32sym
-#define tramp32sym(s) ((s)-boot_trampoline_start)
+#define tramp32sym(s) ((s)-perm_trampoline_start)
 
 #define tramp32sym_rel(sym, off, opnd...)  \
         tramp32sym(sym),##opnd;            \
@@ -83,6 +83,8 @@ GLOBAL(bootdata_start)
 
         .code16
 
+ENTRY(perm_trampoline_start)
+
 /*
  * do_boot_cpu() programs the Startup-IPI to point here.  Due to the SIPI
  * format, the relocated entrypoint must be 4k aligned.
@@ -90,6 +92,7 @@ GLOBAL(bootdata_start)
  * It is entered in Real Mode, with %cs = trampoline_realmode_entry >> 4 and
  * %ip = 0.
  */
+
 GLOBAL(trampoline_realmode_entry)
         mov     %cs,%ax
         mov     %ax,%ds
@@ -97,7 +100,7 @@ GLOBAL(trampoline_realmode_entry)
         cld
         cli
         lidt    trampsym(idt_48)
-        lgdt    trampsym(gdt_48)
+        lgdtl   trampsym(gdt_48)
         mov     $1,%bl                    # EBX != 0 indicates we are an AP
         xor     %ax, %ax
         inc     %ax
@@ -236,11 +239,23 @@ gdt_48: .word   7*8-1
 
 /* The first page of trampoline is permanent, the rest boot-time only. */
 /* Reuse the boot trampoline on the 1st trampoline page as stack for wakeup. */
-        .equ    wakeup_stack, boot_trampoline_start + PAGE_SIZE
+        .equ    wakeup_stack, perm_trampoline_start + PAGE_SIZE
         .global wakeup_stack
 
+ENTRY(perm_trampoline_end)
+
 /* From here on early boot only. */
 
+ENTRY(boot_trampoline_start)
+
+        .word   0
+boot16_idt:
+        .word   0, 0, 0 # base = limit = 0
+        .word   0
+boot16_gdt:
+        .word   7*8-1
+        .long   tramp32sym_rel(trampoline_gdt,4)
+
         .code32
 trampoline_boot_cpu_entry:
         /* Load pseudo-real-mode segments. */
@@ -304,8 +319,8 @@ trampoline_boot_cpu_entry:
         cli
 
         /* Reset GDT and IDT. Some BIOSes clobber GDTR. */
-        lidt    bootsym(idt_48)
-        lgdt    bootsym(gdt_48)
+        lidt    bootsym(boot16_idt)
+        lgdtl   bootsym(boot16_gdt)
 
         /* Enter protected mode, and flush insn queue. */
         xor     %ax,%ax
@@ -343,7 +358,8 @@ trampoline_boot_cpu_entry:
         xor     %ebx,%ebx
 
         /* Jump to the common bootstrap entry point. */
-        jmp     trampoline_protmode_entry
+        mov     $tramp32sym_rel(trampoline_protmode_entry,4,%eax)
+        jmp     *%eax
 
 #include "video.h"
 
@@ -379,6 +395,8 @@ rm_idt: .word   256*4-1, 0, 0
 #include "video.S"
 #endif
 
+ENTRY(boot_trampoline_end)
+
         .pushsection .data.boot16, "aw", @progbits
 GLOBAL(bootdata_end)
         .popsection
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index a074176c00..05ced26597 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -389,7 +389,7 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		      &c->x86_capability[cpufeat_word(X86_FEATURE_LAHF_LM)],
 		      &c->x86_capability[cpufeat_word(X86_FEATURE_SYSCALL)]);
 	if (c == &boot_cpu_data)
-		bootsym(cpuid_ext_features) =
+		trampsym(cpuid_ext_features) =
 			c->x86_capability[cpufeat_word(X86_FEATURE_NX)];
 
 	if (c->extended_cpuid_level >= 0x80000004)
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 5356a6ae10..1970eb1848 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -269,7 +269,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 				 MSR_IA32_MISC_ENABLE_XD_DISABLE);
 	if (disable) {
 		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
-		bootsym(trampoline_misc_enable_off) |= disable;
+		trampsym(trampoline_misc_enable_off) |= disable;
 	}
 
 	if (disable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID)
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index fac2b5e12a..f8c6ea100c 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -98,29 +98,6 @@ static void __init efi_arch_relocate_image(unsigned long delta)
     }
 }
 
-extern const s32 __trampoline_rel_start[], __trampoline_rel_stop[];
-extern const s32 __trampoline32_rel_start[], __trampoline32_rel_stop[];
-
-static void __init relocate_trampoline(unsigned long phys)
-{
-    const s32 *trampoline_ptr;
-
-    trampoline_phys = phys;
-
-    if ( !efi_enabled(EFI_LOADER) )
-        return;
-
-    /* Apply relocations to trampoline. */
-    for ( trampoline_ptr = __trampoline_rel_start;
-          trampoline_ptr < __trampoline_rel_stop;
-          ++trampoline_ptr )
-        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
-    for ( trampoline_ptr = __trampoline32_rel_start;
-          trampoline_ptr < __trampoline32_rel_stop;
-          ++trampoline_ptr )
-        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
-}
-
 static void __init place_string(u32 *addr, const char *s)
 {
     char *alloc = NULL;
@@ -223,7 +200,7 @@ static void __init efi_arch_pre_exit_boot(void)
     {
         if ( !cfg.addr )
             blexit(L"No memory for trampoline");
-        relocate_trampoline(cfg.addr);
+        trampoline_phys = cfg.addr;
     }
 }
 
@@ -232,7 +209,6 @@ static void __init noreturn efi_arch_post_exit_boot(void)
     u64 cr4 = XEN_MINIMAL_CR4 & ~X86_CR4_PGE, efer;
 
     efi_arch_relocate_image(__XEN_VIRT_START - xen_phys_start);
-    memcpy((void *)trampoline_phys, boot_trampoline_start, cfg.size);
 
     /* Set system registers and transfer control. */
     asm volatile("pushq $0\n\tpopfq");
@@ -566,14 +542,14 @@ static void __init efi_arch_memory_setup(void)
     cfg.addr = 0x100000;
 
     if ( efi_enabled(EFI_LOADER) )
-        cfg.size = boot_trampoline_end - boot_trampoline_start;
+        cfg.size = perm_trampoline_end - perm_trampoline_start;
     else
         cfg.size = TRAMPOLINE_SPACE + TRAMPOLINE_STACK_SPACE;
 
     status = efi_bs->AllocatePages(AllocateMaxAddress, EfiLoaderData,
                                    PFN_UP(cfg.size), &cfg.addr);
     if ( status == EFI_SUCCESS )
-        relocate_trampoline(cfg.addr);
+        trampoline_phys = cfg.addr;
     else
     {
         cfg.addr = 0;
@@ -665,7 +641,6 @@ static void __init efi_arch_load_addr_check(EFI_LOADED_IMAGE *loaded_image)
         blexit(L"Xen must be loaded below 4Gb.");
     if ( xen_phys_start & ((1 << L2_PAGETABLE_SHIFT) - 1) )
         blexit(L"Xen must be loaded at a 2Mb boundary.");
-    trampoline_xen_phys_start = xen_phys_start;
 }
 
 static bool __init efi_arch_use_config_file(EFI_SYSTEM_TABLE *SystemTable)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 5c818caed2..4faafc180c 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -679,6 +679,45 @@ static unsigned int __init copy_bios_e820(struct e820entry *map, unsigned int li
     return n;
 }
 
+extern const s32 __trampoline_rel_start[], __trampoline_rel_stop[];
+extern const s32 __trampoline32_rel_start[], __trampoline32_rel_stop[];
+
+static void __init relocate_trampoline(unsigned long phys)
+{
+    const s32 *trampoline_ptr;
+    uint32_t tramp32_delta;
+
+    /* Apply relocations to trampoline. */
+    for ( trampoline_ptr = __trampoline_rel_start;
+          trampoline_ptr < __trampoline_rel_stop;
+          ++trampoline_ptr )
+        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
+
+    tramp32_delta = phys;
+    if ( !efi_enabled(EFI_LOADER) )
+    {
+        /*
+         * The non-EFI boot code uses the 32-bit trampoline in place
+         * so will have relocated it to the physical address of
+         * perm_trampoline_start already. Undo that as it needs to
+         * run from low memory for AP startup, because the Xen
+         * physical address range won't be mapped.
+         */
+        tramp32_delta -= trampoline_xen_phys_start;
+        tramp32_delta -= (unsigned long)(perm_trampoline_start - __XEN_VIRT_START);
+    }
+    for ( trampoline_ptr = __trampoline32_rel_start;
+          trampoline_ptr < __trampoline32_rel_stop;
+          ++trampoline_ptr )
+    {
+        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += tramp32_delta;
+    }
+    trampoline_xen_phys_start = xen_phys_start;
+
+    memcpy(trampsym(perm_trampoline_start), perm_trampoline_start,
+           perm_trampoline_end - perm_trampoline_start);
+}
+
 void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
@@ -1073,7 +1112,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
             /* Select relocation address. */
             e = end - reloc_size;
             xen_phys_start = e;
-            bootsym(trampoline_xen_phys_start) = e;
 
             /*
              * No PTEs pointing above this address are candidates for relocation.
@@ -1506,6 +1544,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     else
         end_boot_allocator();
 
+    relocate_trampoline(trampoline_phys);
+
     system_state = SYS_STATE_boot;
     /*
      * No calls involving ACPI code should go between the setting of
@@ -1876,8 +1916,8 @@ int __hwdom_init xen_in_range(unsigned long mfn)
     if ( !xen_regions[0].s )
     {
         /* S3 resume code (and other real mode trampoline code) */
-        xen_regions[region_s3].s = bootsym_phys(boot_trampoline_start);
-        xen_regions[region_s3].e = bootsym_phys(boot_trampoline_end);
+        xen_regions[region_s3].s = trampsym_phys(perm_trampoline_start);
+        xen_regions[region_s3].e = trampsym_phys(perm_trampoline_end);
 
         /*
          * This needs to remain in sync with the uses of the same symbols in
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 5c4254ac87..8e0c844e37 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -47,7 +47,7 @@
 #include <asm/tboot.h>
 #include <mach_apic.h>
 
-#define setup_trampoline()    (bootsym_phys(trampoline_realmode_entry))
+#define setup_trampoline()    (trampsym_phys(trampoline_realmode_entry))
 
 unsigned long __read_mostly trampoline_phys;
 
@@ -598,7 +598,7 @@ static int do_boot_cpu(int apicid, int cpu)
         {
             boot_error = 1;
             smp_mb();
-            if ( bootsym(trampoline_cpu_started) == 0xA5 )
+            if ( trampsym(trampoline_cpu_started) == 0xA5 )
                 /* trampoline started but...? */
                 printk("Stuck ??\n");
             else
@@ -614,7 +614,7 @@ static int do_boot_cpu(int apicid, int cpu)
     }
 
     /* mark "stuck" area as not stuck */
-    bootsym(trampoline_cpu_started) = 0;
+    trampsym(trampoline_cpu_started) = 0;
     smp_mb();
 
     return rc;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 2769e1ccd9..ad2591578d 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -369,9 +369,9 @@ void tboot_shutdown(uint32_t shutdown_type)
          */
         g_tboot_shared->num_mac_regions = 3;
         /* S3 resume code (and other real mode trampoline code) */
-        g_tboot_shared->mac_regions[0].start = bootsym_phys(boot_trampoline_start);
-        g_tboot_shared->mac_regions[0].size = bootsym_phys(boot_trampoline_end) -
-                                              bootsym_phys(boot_trampoline_start);
+        g_tboot_shared->mac_regions[0].start = trampsym_phys(perm_trampoline_start);
+        g_tboot_shared->mac_regions[0].size = trampsym_phys(perm_trampoline_end) -
+                                              trampsym_phys(perm_trampoline_start);
         /* hypervisor .text + .rodata */
         g_tboot_shared->mac_regions[1].start = (uint64_t)__pa(&_stext);
         g_tboot_shared->mac_regions[1].size = __pa(&__2M_rodata_end) -
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 149cc4f7b5..05d476c423 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -697,7 +697,7 @@ void __init zap_low_mappings(void)
 
     /* Replace with mapping of the boot trampoline only. */
     map_pages_to_xen(trampoline_phys, maddr_to_mfn(trampoline_phys),
-                     PFN_UP(boot_trampoline_end - boot_trampoline_start),
+                     PFN_UP(perm_trampoline_end - perm_trampoline_start),
                      __PAGE_HYPERVISOR);
 }
 
diff --git a/xen/include/asm-x86/acpi.h b/xen/include/asm-x86/acpi.h
index 7032f3a001..8c5381aa47 100644
--- a/xen/include/asm-x86/acpi.h
+++ b/xen/include/asm-x86/acpi.h
@@ -106,7 +106,7 @@ extern int acpi_scan_nodes(u64 start, u64 end);
 #define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
 
 extern struct acpi_sleep_info acpi_sinfo;
-#define acpi_video_flags bootsym(video_flags)
+#define acpi_video_flags trampsym(video_flags)
 struct xenpf_enter_acpi_sleep;
 extern int acpi_enter_sleep(struct xenpf_enter_acpi_sleep *sleep);
 extern int acpi_enter_state(u32 state);
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 50481748f6..e5043fafbd 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -89,12 +89,12 @@
 
 #ifndef __ASSEMBLY__
 extern unsigned long trampoline_phys;
-#define bootsym_phys(sym)                                 \
-    (((unsigned long)&(sym)-(unsigned long)&boot_trampoline_start)+trampoline_phys)
-#define bootsym(sym)                                      \
-    (*RELOC_HIDE((typeof(&(sym)))__va(__pa(&(sym))),      \
-                 trampoline_phys-__pa(boot_trampoline_start)))
-extern char boot_trampoline_start[], boot_trampoline_end[];
+#define trampsym_phys(sym)                                 \
+    (((unsigned long)&(sym) - (unsigned long)&perm_trampoline_start) + trampoline_phys)
+#define trampsym(sym)                                      \
+    (*RELOC_HIDE((typeof(&(sym)))__va(__pa(&(sym))),       \
+                 trampoline_phys - __pa(perm_trampoline_start)))
+extern char perm_trampoline_start[], perm_trampoline_end[];
 extern char trampoline_realmode_entry[];
 extern unsigned int trampoline_xen_phys_start;
 extern unsigned char trampoline_cpu_started;
-- 
2.21.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot
  2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
                     ` (3 preceding siblings ...)
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 5/5] x86/boot: Do not use trampoline for no-real-mode boot paths David Woodhouse
@ 2019-08-30 14:25   ` Jan Beulich
  2019-08-30 14:28     ` David Woodhouse
  4 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-08-30 14:25 UTC (permalink / raw)
  To: David Woodhouse; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper

On 21.08.2019 18:35, David Woodhouse wrote:
> --- a/xen/arch/x86/boot/head.S
> +++ b/xen/arch/x86/boot/head.S
> @@ -727,7 +727,17 @@ trampoline_setup:
>          /* Switch to low-memory stack which lives at the end of trampoline region. */
>          mov     sym_fs(trampoline_phys),%edi
>          lea     TRAMPOLINE_SPACE+TRAMPOLINE_STACK_SPACE(%edi),%esp
> +        cmpb    $0, sym_fs(skip_realmode)
> +        jz      1f
> +        /* If no-real-mode, jump straight to trampoline_protmode_entry */
> +        lea     trampoline_protmode_entry-trampoline_start(%edi),%eax
> +        /* EBX == 0 indicates we are the BP (Boot Processor). */
> +        xor     %ebx,%ebx
> +        jmp     2f
> +1:
> +        /* Go via 16-bit code in trampoline_boot_cpu_entry */
>          lea     trampoline_boot_cpu_entry-trampoline_start(%edi),%eax
> +2:
>          pushl   $BOOT_CS32
>          push    %eax

Provided it goes in together with the subsequent change removing this
double jump again
Acked-by: Jan Beulich <jbeulich@suse.com>

Of course it would have been nice if within your addition you'd been
consistent with adding (or not) blanks after commas separating insn
operands.

Jan


* Re: [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot
  2019-08-30 14:25   ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot Jan Beulich
@ 2019-08-30 14:28     ` David Woodhouse
  0 siblings, 0 replies; 19+ messages in thread
From: David Woodhouse @ 2019-08-30 14:28 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper


On Fri, 2019-08-30 at 16:25 +0200, Jan Beulich wrote:
> On 21.08.2019 18:35, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -727,7 +727,17 @@ trampoline_setup:
> >          /* Switch to low-memory stack which lives at the end of trampoline region. */
> >          mov     sym_fs(trampoline_phys),%edi
> >          lea     TRAMPOLINE_SPACE+TRAMPOLINE_STACK_SPACE(%edi),%esp
> > +        cmpb    $0, sym_fs(skip_realmode)
> > +        jz      1f
> > +        /* If no-real-mode, jump straight to trampoline_protmode_entry */
> > +        lea     trampoline_protmode_entry-trampoline_start(%edi),%eax
> > +        /* EBX == 0 indicates we are the BP (Boot Processor). */
> > +        xor     %ebx,%ebx
> > +        jmp     2f
> > +1:
> > +        /* Go via 16-bit code in trampoline_boot_cpu_entry */
> >          lea     trampoline_boot_cpu_entry-trampoline_start(%edi),%eax
> > +2:
> >          pushl   $BOOT_CS32
> >          push    %eax
> 
> Provided it goes in together with the subsequent change removing this
> double jump again
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> Of course it would have been nice if within you addition you'd been
> consistent with adding (or not) blanks after commas separating insn
> operands.

Yeah, I can see how that would be useful. I'll do it as I rebase on top
of the current staging branch and redo the conflict resolution with
Andy's changes.


* Re: [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations David Woodhouse
@ 2019-08-30 15:10     ` Jan Beulich
  2019-08-30 16:12       ` David Woodhouse
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-08-30 15:10 UTC (permalink / raw)
  To: David Woodhouse; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper

On 21.08.2019 18:35, David Woodhouse wrote:
> --- a/xen/arch/x86/boot/head.S
> +++ b/xen/arch/x86/boot/head.S
> @@ -699,14 +699,30 @@ trampoline_setup:
>          cmp     $sym_offs(__trampoline_rel_stop),%edi
>          jb      1b
>  
> -        /* Patch in the trampoline segment. */
> +        mov     $sym_offs(__trampoline32_rel_start),%edi
> +1:
> +        mov     %fs:(%edi),%eax
> +        add     %edx,%fs:(%edi,%eax)
> +        add     $4,%edi
> +        cmp     $sym_offs(__trampoline32_rel_stop),%edi
> +        jb      1b
> +
> +        mov     $sym_offs(__bootsym_rel_start),%edi
> +1:
> +        mov     %fs:(%edi),%eax
> +        add     %edx,%fs:(%edi,%eax)
> +        add     $4,%edi
> +        cmp     $sym_offs(__bootsym_rel_stop),%edi
> +        jb      1b

With the smaller sets now - are we risking misbehavior if one
of the relocation sets ends up empty? This wasn't reasonable to
expect before, but I think it would be nice to have a build-time
check rather than a hard to debug crash in case this happens.
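
[Editorial note: one way to get such a build-time check (a sketch only, reusing the __*_rel_start/stop symbols the linker script already defines) would be ASSERT()s in xen.lds.S. The head.S loops are do-while shaped, so an empty set would apply one bogus relocation before the bound is tested; an ASSERT turns that into a link failure instead.]

```ld
/* Hypothetical additions to xen/arch/x86/xen.lds.S, next to the
 * definitions of the relocation sections: fail the link if any
 * relocation set comes out empty. */
ASSERT(__trampoline_rel_stop > __trampoline_rel_start,
       ".trampoline_rel must not be empty")
ASSERT(__trampoline32_rel_stop > __trampoline32_rel_start,
       ".trampoline32_rel must not be empty")
ASSERT(__bootsym_rel_stop > __bootsym_rel_start,
       ".bootsym_rel must not be empty")
```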

> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -16,21 +16,62 @@
>   * not guaranteed to persist.
>   */
>  
> -/* NB. bootsym() is only usable in real mode, or via BOOT_PSEUDORM_DS. */
> +/*
> + * There are four sets of relocations:
> + *
> + * bootsym():     Boot-time code relocated to low memory and run only once.
> + *                Only usable at boot; in real mode or via BOOT_PSEUDORM_DS.
> + * bootdatasym(): Boot-time BIOS-discovered data, relocated back up to Xen
> + *                image after discovery.
> + * trampsym():    Permanent trampoline code relocated into low memory for AP
> + *                startup and wakeup.
> + * tramp32sym():  32-bit trampoline code which at boot can be used directly
> + *                from the Xen image in memory, but which will need to be
> + *                relocated into low (well, into *mapped*) memory in order
> + *                to be used for AP startup.
> + */
>  #undef bootsym
>  #define bootsym(s) ((s)-trampoline_start)
>  
>  #define bootsym_rel(sym, off, opnd...)     \
>          bootsym(sym),##opnd;               \
>  111:;                                      \
> -        .pushsection .trampoline_rel, "a"; \
> +        .pushsection .bootsym_rel, "a";    \
>          .long 111b - (off) - .;            \
>          .popsection
>  
>  #define bootsym_segrel(sym, off)           \
>          $0,$bootsym(sym);                  \
>  111:;                                      \
> -        .pushsection .trampoline_seg, "a"; \
> +        .pushsection .bootsym_seg, "a";    \
> +        .long 111b - (off) - .;            \
> +        .popsection
> +
> +#define bootdatasym(s) ((s)-trampoline_start)
> +#define bootdatasym_rel(sym, off, opnd...) \
> +        bootdatasym(sym),##opnd;           \
> +111:;                                      \
> +        .pushsection .bootdatasym_rel, "a";\
> +        .long 111b - (off) - .;            \
> +        .popsection
> +
> +#undef trampsym

Why this and ...

> +#define trampsym(s) ((s)-trampoline_start)
> +
> +#define trampsym_rel(sym, off, opnd...)    \
> +        trampsym(sym),##opnd;              \
> +111:;                                      \
> +        .pushsection .trampsym_rel, "a";   \
> +        .long 111b - (off) - .;            \
> +        .popsection
> +
> +#undef tramp32sym

... this #undef? You have none ahead of the bootdatasym #define-s,
and (other than for bootsym) there's no conflicting C-level one
afaics.

> +#define tramp32sym(s) ((s)-trampoline_start)
> +
> +#define tramp32sym_rel(sym, off, opnd...)  \
> +        tramp32sym(sym),##opnd;            \
> +111:;                                      \
> +        .pushsection .tramp32sym_rel, "a"; \
>          .long 111b - (off) - .;            \
>          .popsection

After your reply to my comment regarding the redundancy here I've
checked (in your git branch) how things end up. Am I mistaken, or
are the trampsym and tramp32sym #define-s entirely identical
(except for the relocations section name)? Even between the others
there's little enough difference, so it continues to be unclear to
me why you think it's better to have four instances of about the
same (not entirely trivial) thing.

> @@ -48,16 +89,19 @@
>  GLOBAL(trampoline_realmode_entry)
>          mov     %cs,%ax
>          mov     %ax,%ds
> -        movb    $0xA5,bootsym(trampoline_cpu_started)
> +        movb    $0xA5,trampsym(trampoline_cpu_started)
>          cld
>          cli
> -        lidt    bootsym(idt_48)
> -        lgdt    bootsym(gdt_48)
> +        lidt    trampsym(idt_48)
> +        lgdt    trampsym(gdt_48)
>          mov     $1,%bl                    # EBX != 0 indicates we are an AP
>          xor     %ax, %ax
>          inc     %ax
>          lmsw    %ax                       # CR0.PE = 1 (enter protected mode)
> -        ljmpl   $BOOT_CS32,$bootsym_rel(trampoline_protmode_entry,6)
> +        ljmpl   $BOOT_CS32,$tramp32sym_rel(trampoline_protmode_entry,6)
> +
> +GLOBAL(trampoline_cpu_started)
> +        .byte   0

The movement of this item here seems unrelated to this change; it's
also not mentioned in the description.

> @@ -115,10 +115,10 @@ static void __init relocate_trampoline(unsigned long phys)
>            trampoline_ptr < __trampoline_rel_stop;
>            ++trampoline_ptr )
>          *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
> -    for ( trampoline_ptr = __trampoline_seg_start;
> -          trampoline_ptr < __trampoline_seg_stop;
> +    for ( trampoline_ptr = __trampoline32_rel_start;
> +          trampoline_ptr < __trampoline32_rel_stop;
>            ++trampoline_ptr )
> -        *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
> +        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
>  }

Seeing this and adding in the comment about the redundant tramp*sym
macros I wonder why the relocations can't be put together in a single
section, and there be just a single loop here. (I realize this
entire function gets deleted from here later on, but anyway.)

Jan


* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image David Woodhouse
@ 2019-08-30 15:43     ` Jan Beulich
  2019-08-30 16:25       ` David Woodhouse
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-08-30 15:43 UTC (permalink / raw)
  To: David Woodhouse; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper

On 21.08.2019 18:35, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Ditch the bootsym() access from C code for the variables populated by
> 16-bit boot code. As well as being cleaner this also paves the way for
> not having the 16-bit boot code in low memory for no-real-mode or EFI
> loader boots at all.
> 
> These variables are put into a separate .data.boot16 section and
> accessed in low memory during the real-mode boot, then copied back to
> their native location in the Xen image when real mode has finished.
> 
> Fix the limit in gdt_48 to admit that trampoline_gdt actually includes
> 7 entries, since we do now use the seventh (BOOT_FS) in late code so it
> matters. Andrew has a patch to further tidy up the GDT and initialise
> accessed bits etc., so I won't go overboard with more than the trivial
> size fix for now.
> 
> The bootsym() macro remains in C code purely for the variables which
> are written for the later AP startup and wakeup trampoline to use.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
>  xen/arch/x86/boot/edd.S           |  2 ++
>  xen/arch/x86/boot/head.S          | 16 +++++++++++++++
>  xen/arch/x86/boot/mem.S           |  2 ++
>  xen/arch/x86/boot/trampoline.S    | 33 ++++++++++++++++++++++++++++---
>  xen/arch/x86/boot/video.S         | 30 +++++++++++++++-------------
>  xen/arch/x86/platform_hypercall.c | 18 ++++++++---------
>  xen/arch/x86/setup.c              | 22 ++++++++++-----------
>  xen/arch/x86/xen.lds.S            |  9 ++++++++-
>  xen/include/asm-x86/edd.h         |  1 -
>  9 files changed, 94 insertions(+), 39 deletions(-)
> 
> diff --git a/xen/arch/x86/boot/edd.S b/xen/arch/x86/boot/edd.S
> index 434bbbd960..138d04c964 100644
> --- a/xen/arch/x86/boot/edd.S
> +++ b/xen/arch/x86/boot/edd.S
> @@ -163,6 +163,7 @@ edd_done:
>  .Ledd_mbr_sig_skip:
>          ret
>  
> +        .pushsection .data.boot16, "aw", @progbits
>  GLOBAL(boot_edd_info_nr)
>          .byte   0
>  GLOBAL(boot_mbr_signature_nr)
> @@ -171,3 +172,4 @@ GLOBAL(boot_mbr_signature)
>          .fill   EDD_MBR_SIG_MAX*8,1,0
>  GLOBAL(boot_edd_info)
>          .fill   EDD_INFO_MAX * (EDDEXTSIZE + EDDPARMSIZE), 1, 0
> +        .popsection
> diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> index 4118f73683..6d315020d2 100644
> --- a/xen/arch/x86/boot/head.S
> +++ b/xen/arch/x86/boot/head.S
> @@ -725,6 +725,17 @@ trampoline_setup:
>          cmp     $sym_offs(__bootsym_seg_stop),%edi
>          jb      1b
>  
> +        /* Relocations for the boot data section. */
> +        mov     sym_fs(trampoline_phys),%edx
> +        add     $(boot_trampoline_end - boot_trampoline_start),%edx
> +        mov     $sym_offs(__bootdatasym_rel_start),%edi
> +1:
> +        mov     %fs:(%edi),%eax
> +        add     %edx,%fs:(%edi,%eax)
> +        add     $4,%edi
> +        cmp     $sym_offs(__bootdatasym_rel_stop),%edi
> +        jb      1b
> +
>          /* Do not parse command line on EFI platform here. */
>          cmpb    $0,sym_fs(efi_platform)
>          jnz     1f
> @@ -762,6 +773,11 @@ trampoline_setup:
>          mov     $((boot_trampoline_end - boot_trampoline_start) / 4),%ecx
>          rep movsl %fs:(%esi),%es:(%edi)
>  
> +        /* Copy boot data template to low memory. */
> +        mov     $sym_offs(bootdata_start),%esi
> +        mov     $((bootdata_end - bootdata_start) / 4),%ecx
> +        rep movsl %fs:(%esi),%es:(%edi)

Afaict neither bootdata_start nor bootdata_end is aligned, and so
the difference isn't necessarily a multiple of 4. In fact the
other (preexisting) movsl looks to have the same issue; I wonder
if we propagate bad EDID data for that reason on certain builds /
in certain versions.

> --- a/xen/arch/x86/boot/trampoline.S
> +++ b/xen/arch/x86/boot/trampoline.S
> @@ -47,11 +47,15 @@
>          .long 111b - (off) - .;            \
>          .popsection
>  
> -#define bootdatasym(s) ((s)-boot_trampoline_start)
> +        .pushsection .data.boot16, "aw", @progbits
> +GLOBAL(bootdata_start)
> +        .popsection
> +
> +#define bootdatasym(s) ((s)-bootdata_start+(boot_trampoline_end-boot_trampoline_start))

Please can you add the missing blanks around the binary operators
here? (I should perhaps have asked this already on the earlier patch
adding this #define.) Also it looks like the line might be overly
long.

> --- a/xen/arch/x86/boot/video.S
> +++ b/xen/arch/x86/boot/video.S
> @@ -15,10 +15,10 @@
>  
>  #include "video.h"
>  
> -/* Scratch space layout: boot_trampoline_end to boot_trampoline_end+0x1000. */
> -#define modelist       bootsym(boot_trampoline_end)   /* 2kB (256 entries) */
> -#define vesa_glob_info (modelist + 0x800)        /* 1kB */
> -#define vesa_mode_info (vesa_glob_info + 0x400)  /* 1kB */
> +/* Scratch space layout: bootdata_end to bootdata_end+0x1000. */
> +#define modelist(t)       bootdatasym_rel(bootdata_end,2,t)         /* 2KiB (256 entries) */
> +#define vesa_glob_info(t) bootdatasym_rel((bootdata_end+0x800),2,t) /* 1KiB */
> +#define vesa_mode_info(t) bootdatasym_rel((bootdata_end+0xc00),2,t) /* 1KiB */

Didn't you agree to extend the comment to warn about the risk resulting
from the literal 2-s in here?

> @@ -290,6 +292,11 @@ SECTIONS
>    DECL_SECTION(.data) {
>         *(.data.page_aligned)
>         *(.data)
> +       . = ALIGN(4);
> +       __bootdata_start = .;
> +       *(.data.boot16)
> +       . = ALIGN(4);
> +       __bootdata_end = .;

What do you need the labels for here? And once they're gone the ALIGN()
won't belong here anymore either - suitable alignment should be enforced
by the contributions to the section.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations
  2019-08-30 15:10     ` Jan Beulich
@ 2019-08-30 16:12       ` David Woodhouse
  2019-09-02  7:37         ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-08-30 16:12 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper


On Fri, 2019-08-30 at 17:10 +0200, Jan Beulich wrote:
> On 21.08.2019 18:35, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -699,14 +699,30 @@ trampoline_setup:
> >          cmp     $sym_offs(__trampoline_rel_stop),%edi
> >          jb      1b
> >  
> > -        /* Patch in the trampoline segment. */
> > +        mov     $sym_offs(__trampoline32_rel_start),%edi
> > +1:
> > +        mov     %fs:(%edi),%eax
> > +        add     %edx,%fs:(%edi,%eax)
> > +        add     $4,%edi
> > +        cmp     $sym_offs(__trampoline32_rel_stop),%edi
> > +        jb      1b
> > +
> > +        mov     $sym_offs(__bootsym_rel_start),%edi
> > +1:
> > +        mov     %fs:(%edi),%eax
> > +        add     %edx,%fs:(%edi,%eax)
> > +        add     $4,%edi
> > +        cmp     $sym_offs(__bootsym_rel_stop),%edi
> > +        jb      1b
> 
> With the smaller sets now - are we risking misbehavior if one
> of the relocation sets ends up empty? This wasn't reasonable to
> expect before, but I think it would be nice to have a build-time
> check rather than a hard to debug crash in case this happens.

Or just code it differently as a while() instead of a do{}while() so
that it actually copes with a zero-length section. 
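In C terms, the difference being described looks roughly like this (a sketch with invented names, not the actual head.S logic): a bottom-tested do{}while loop always applies at least one relocation even for an empty set, while a top-tested while() degrades to a no-op.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Apply `n` 32-bit relocations to `image`.  Top-tested, so a
 * zero-length relocation set (n == 0) is handled as a no-op
 * rather than applying one bogus relocation. */
static unsigned apply_relocs(uint32_t *image, const size_t *offs,
                             size_t n, uint32_t delta)
{
    unsigned applied = 0;

    while ( n-- )            /* while(), not do { } while() */
    {
        image[offs[applied]] += delta;
        ++applied;
    }

    return applied;
}
```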

> > --- a/xen/arch/x86/boot/trampoline.S
> > +++ b/xen/arch/x86/boot/trampoline.S
> > @@ -16,21 +16,62 @@
> >   * not guaranteed to persist.
> >   */
> >  
> > -/* NB. bootsym() is only usable in real mode, or via BOOT_PSEUDORM_DS. */
> > +/*
> > + * There are four sets of relocations:
> > + *
> > + * bootsym():     Boot-time code relocated to low memory and run only once.
> > + *                Only usable at boot; in real mode or via BOOT_PSEUDORM_DS.
> > + * bootdatasym(): Boot-time BIOS-discovered data, relocated back up to Xen
> > + *                image after discovery.
> > + * trampsym():    Permanent trampoline code relocated into low memory for AP
> > + *                startup and wakeup.
> > + * tramp32sym():  32-bit trampoline code which at boot can be used directly
> > + *                from the Xen image in memory, but which will need to be
> > + *                relocated into low (well, into *mapped*) memory in order
> > + *                to be used for AP startup.
> > + */
> >  #undef bootsym
> >  #define bootsym(s) ((s)-trampoline_start)
> >  
> >  #define bootsym_rel(sym, off, opnd...)     \
> >          bootsym(sym),##opnd;               \
> >  111:;                                      \
> > -        .pushsection .trampoline_rel, "a"; \
> > +        .pushsection .bootsym_rel, "a";    \
> >          .long 111b - (off) - .;            \
> >          .popsection
> >  
> >  #define bootsym_segrel(sym, off)           \
> >          $0,$bootsym(sym);                  \
> >  111:;                                      \
> > -        .pushsection .trampoline_seg, "a"; \
> > +        .pushsection .bootsym_seg, "a";    \
> > +        .long 111b - (off) - .;            \
> > +        .popsection
> > +
> > +#define bootdatasym(s) ((s)-trampoline_start)
> > +#define bootdatasym_rel(sym, off, opnd...) \
> > +        bootdatasym(sym),##opnd;           \
> > +111:;                                      \
> > +        .pushsection .bootdatasym_rel, "a";\
> > +        .long 111b - (off) - .;            \
> > +        .popsection
> > +
> > +#undef trampsym
> 
> Why this and ...
> 
> > +#define trampsym(s) ((s)-trampoline_start)
> > +
> > +#define trampsym_rel(sym, off, opnd...)    \
> > +        trampsym(sym),##opnd;              \
> > +111:;                                      \
> > +        .pushsection .trampsym_rel, "a";   \
> > +        .long 111b - (off) - .;            \
> > +        .popsection
> > +
> > +#undef tramp32sym
> 
> ... this #undef? You have none ahead of the bootdatasym #define-s,
> and (other than for bootsym) there's not conflicting C level one
> afaics.
> 
> > +#define tramp32sym(s) ((s)-trampoline_start)
> > +
> > +#define tramp32sym_rel(sym, off, opnd...)  \
> > +        tramp32sym(sym),##opnd;            \
> > +111:;                                      \
> > +        .pushsection .tramp32sym_rel, "a"; \
> >          .long 111b - (off) - .;            \
> >          .popsection
> 
> After your reply to my comment regarding the redundancy here I've
> checked (in your git branch) how things end up. Am I mistaken, or
> are the trampsym and tramp32sym #define-s entirely identical
> (except for the relocations section name)? Even between the others
> there's little enough difference, so it continues to be unclear to
> me why you think it's better to have four instances of about the
> same (not entirely trivial) thing.

The distinction is that in a no-real-mode boot tramp32 is used in place
in the Xen image at the physical address it happened to be loaded at,
and then *again* later in the AP/wakeup path. In the latter case it
needs to be moved to low memory (or we need to put the physical
location into idle_pg_table which seemed to be harder, as discussed).

So tramp32 symbols get relocated *twice*, while the plain tramp symbols
don't, but actually we could probably ditch the distinction and treat
them all the same, which would reduce the four categories to three.

I'll take a look.

I suppose we could also combine bootsym and bootdatasym, and copy that
*whole* section back up to the Xen image; both code and data. But I'm
inclined to prefer keeping them separate and only copying the data back
up.

> > @@ -48,16 +89,19 @@
> >  GLOBAL(trampoline_realmode_entry)
> >          mov     %cs,%ax
> >          mov     %ax,%ds
> > -        movb    $0xA5,bootsym(trampoline_cpu_started)
> > +        movb    $0xA5,trampsym(trampoline_cpu_started)
> >          cld
> >          cli
> > -        lidt    bootsym(idt_48)
> > -        lgdt    bootsym(gdt_48)
> > +        lidt    trampsym(idt_48)
> > +        lgdt    trampsym(gdt_48)
> >          mov     $1,%bl                    # EBX != 0 indicates we are an AP
> >          xor     %ax, %ax
> >          inc     %ax
> >          lmsw    %ax                       # CR0.PE = 1 (enter protected mode)
> > -        ljmpl   $BOOT_CS32,$bootsym_rel(trampoline_protmode_entry,6)
> > +        ljmpl   $BOOT_CS32,$tramp32sym_rel(trampoline_protmode_entry,6)
> > +
> > +GLOBAL(trampoline_cpu_started)
> > +        .byte   0
> 
> The movement of this item here seems unrelated to this change; it's
> also not mentioned in the description.

Andy's already moved that elsewhere anyway; I'll undo that as I rebase.

> > @@ -115,10 +115,10 @@ static void __init relocate_trampoline(unsigned long phys)
> >            trampoline_ptr < __trampoline_rel_stop;
> >            ++trampoline_ptr )
> >          *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
> > -    for ( trampoline_ptr = __trampoline_seg_start;
> > -          trampoline_ptr < __trampoline_seg_stop;
> > +    for ( trampoline_ptr = __trampoline32_rel_start;
> > +          trampoline_ptr < __trampoline32_rel_stop;
> >            ++trampoline_ptr )
> > -        *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
> > +        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
> >  }
> 
> Seeing this and adding in the comment about the redundant tramp*sym
> macros I wonder why the relocations can't be put together in a single
> section, and there be just a single loop here. (I realize this
> entire function gets deleted from here later on, but anyway.)

Yeah, I think it's worth the harmless double-relocation in the non-EFI
case to treat everything (well, tramp vs. tramp32) the same there. I'll
do that.

Thanks.


* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-08-30 15:43     ` Jan Beulich
@ 2019-08-30 16:25       ` David Woodhouse
  2019-09-02  7:44         ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-08-30 16:25 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper


On Fri, 2019-08-30 at 17:43 +0200, Jan Beulich wrote:
> On 21.08.2019 18:35, David Woodhouse wrote:
> > From: David Woodhouse <dwmw@amazon.co.uk>
> > 
> > Ditch the bootsym() access from C code for the variables populated by
> > 16-bit boot code. As well as being cleaner this also paves the way for
> > not having the 16-bit boot code in low memory for no-real-mode or EFI
> > loader boots at all.
> > 
> > These variables are put into a separate .data.boot16 section and
> > accessed in low memory during the real-mode boot, then copied back to
> > their native location in the Xen image when real mode has finished.
> > 
> > Fix the limit in gdt_48 to admit that trampoline_gdt actually includes
> > 7 entries, since we do now use the seventh (BOOT_FS) in late code so it
> > matters. Andrew has a patch to further tidy up the GDT and initialise
> > accessed bits etc., so I won't go overboard with more than the trivial
> > size fix for now.
> > 
> > The bootsym() macro remains in C code purely for the variables which
> > are written for the later AP startup and wakeup trampoline to use.
> > 
> > Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> > ---
> >  xen/arch/x86/boot/edd.S           |  2 ++
> >  xen/arch/x86/boot/head.S          | 16 +++++++++++++++
> >  xen/arch/x86/boot/mem.S           |  2 ++
> >  xen/arch/x86/boot/trampoline.S    | 33 ++++++++++++++++++++++++++++---
> >  xen/arch/x86/boot/video.S         | 30 +++++++++++++++-------------
> >  xen/arch/x86/platform_hypercall.c | 18 ++++++++---------
> >  xen/arch/x86/setup.c              | 22 ++++++++++-----------
> >  xen/arch/x86/xen.lds.S            |  9 ++++++++-
> >  xen/include/asm-x86/edd.h         |  1 -
> >  9 files changed, 94 insertions(+), 39 deletions(-)
> > 
> > diff --git a/xen/arch/x86/boot/edd.S b/xen/arch/x86/boot/edd.S
> > index 434bbbd960..138d04c964 100644
> > --- a/xen/arch/x86/boot/edd.S
> > +++ b/xen/arch/x86/boot/edd.S
> > @@ -163,6 +163,7 @@ edd_done:
> >  .Ledd_mbr_sig_skip:
> >          ret
> >  
> > +        .pushsection .data.boot16, "aw", @progbits
> >  GLOBAL(boot_edd_info_nr)
> >          .byte   0
> >  GLOBAL(boot_mbr_signature_nr)
> > @@ -171,3 +172,4 @@ GLOBAL(boot_mbr_signature)
> >          .fill   EDD_MBR_SIG_MAX*8,1,0
> >  GLOBAL(boot_edd_info)
> >          .fill   EDD_INFO_MAX * (EDDEXTSIZE + EDDPARMSIZE), 1, 0
> > +        .popsection
> > diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
> > index 4118f73683..6d315020d2 100644
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -725,6 +725,17 @@ trampoline_setup:
> >          cmp     $sym_offs(__bootsym_seg_stop),%edi
> >          jb      1b
> >  
> > +        /* Relocations for the boot data section. */
> > +        mov     sym_fs(trampoline_phys),%edx
> > +        add     $(boot_trampoline_end - boot_trampoline_start),%edx
> > +        mov     $sym_offs(__bootdatasym_rel_start),%edi
> > +1:
> > +        mov     %fs:(%edi),%eax
> > +        add     %edx,%fs:(%edi,%eax)
> > +        add     $4,%edi
> > +        cmp     $sym_offs(__bootdatasym_rel_stop),%edi
> > +        jb      1b
> > +
> >          /* Do not parse command line on EFI platform here. */
> >          cmpb    $0,sym_fs(efi_platform)
> >          jnz     1f
> > @@ -762,6 +773,11 @@ trampoline_setup:
> >          mov     $((boot_trampoline_end - boot_trampoline_start) / 4),%ecx
> >          rep movsl %fs:(%esi),%es:(%edi)
> >  
> > +        /* Copy boot data template to low memory. */
> > +        mov     $sym_offs(bootdata_start),%esi
> > +        mov     $((bootdata_end - bootdata_start) / 4),%ecx
> > +        rep movsl %fs:(%esi),%es:(%edi)
> 
> Afaict neither bootdata_start nor bootdata_end are aligned, and so
> the difference isn't necessarily a multiple of 4. In fact the
> other (preexisting) movsl looks to have the same issue; I wonder
> if we propagate bad EDID data for that reason on certain builds /
> in certain versions.

Hm, I'm not sure I quite realised the distinction between
bootdata_start and __bootdata_start (and likewise _end).

Now that things are placed in the .data.boot16 section by
.pushsection/.popsection can we rely on the ordering, and that the
globals in the .S files are actually at the start and end?

I thought we *needed* to use the ones in the linker script, and what I
should probably do here is kill bootdata_start/bootdata_end completely
and rely only on the ones from the linker script?

Either that or I should kill the ones in the linker script completely.

> > --- a/xen/arch/x86/boot/trampoline.S
> > +++ b/xen/arch/x86/boot/trampoline.S
> > @@ -47,11 +47,15 @@
> >          .long 111b - (off) - .;            \
> >          .popsection
> >  
> > -#define bootdatasym(s) ((s)-boot_trampoline_start)
> > +        .pushsection .data.boot16, "aw", @progbits
> > +GLOBAL(bootdata_start)
> > +        .popsection
> > +
> > +#define bootdatasym(s) ((s)-bootdata_start+(boot_trampoline_end-boot_trampoline_start))
> 
> Please can you add the missing blanks around the binary operators
> here? (I should perhaps have asked this already on the earlier patch
> adding this #define.) Also it looks like the line might be overly
> long.

Ack.

> > --- a/xen/arch/x86/boot/video.S
> > +++ b/xen/arch/x86/boot/video.S
> > @@ -15,10 +15,10 @@
> >  
> >  #include "video.h"
> >  
> > -/* Scratch space layout: boot_trampoline_end to boot_trampoline_end+0x1000. */
> > -#define modelist       bootsym(boot_trampoline_end)   /* 2kB (256 entries) */
> > -#define vesa_glob_info (modelist + 0x800)        /* 1kB */
> > -#define vesa_mode_info (vesa_glob_info + 0x400)  /* 1kB */
> > +/* Scratch space layout: bootdata_end to bootdata_end+0x1000. */
> > +#define modelist(t)       bootdatasym_rel(bootdata_end,2,t)         /* 2KiB (256 entries) */
> > +#define vesa_glob_info(t) bootdatasym_rel((bootdata_end+0x800),2,t) /* 1KiB */
> > +#define vesa_mode_info(t) bootdatasym_rel((bootdata_end+0xc00),2,t) /* 1KiB */
> 
> Didn't you agree to extend the comment to warn about the risk resulting
> from the literal 2-s in here?

I think I didn't explicitly respond to that paragraph, and thus I
missed it when I went back through the emails to check I'd caught
everything. Will do it this time; apologies for missing it.

> > @@ -290,6 +292,11 @@ SECTIONS
> >    DECL_SECTION(.data) {
> >         *(.data.page_aligned)
> >         *(.data)
> > +       . = ALIGN(4);
> > +       __bootdata_start = .;
> > +       *(.data.boot16)
> > +       . = ALIGN(4);
> > +       __bootdata_end = .;
> 
> What do you need the labels for here? And once they're gone the ALIGN()
> won't belong here anymore either - suitable alignment should be enforced
> by the contributions to the section.

See above. Am I right to be concerned about the fragility of putting
the symbols in the .S files? Doing it in the linker script is more
robust, isn't it?

I know we *currently* build everything with #include from one huge .S
file and thus we do know that they'll end up first/last as we desire,
and it doesn't depend on link order or any crap like that. But I don't
like depending on that.


* Re: [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations
  2019-08-30 16:12       ` David Woodhouse
@ 2019-09-02  7:37         ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2019-09-02  7:37 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monné

On 30.08.2019 18:12, David Woodhouse wrote:
> On Fri, 2019-08-30 at 17:10 +0200, Jan Beulich wrote:
>> On 21.08.2019 18:35, David Woodhouse wrote:
>>> --- a/xen/arch/x86/boot/trampoline.S
>>> +++ b/xen/arch/x86/boot/trampoline.S
>>> @@ -16,21 +16,62 @@
>>>   * not guaranteed to persist.
>>>   */
>>>  
>>> -/* NB. bootsym() is only usable in real mode, or via BOOT_PSEUDORM_DS. */
>>> +/*
>>> + * There are four sets of relocations:
>>> + *
>>> + * bootsym():     Boot-time code relocated to low memory and run only once.
>>> + *                Only usable at boot; in real mode or via BOOT_PSEUDORM_DS.
>>> + * bootdatasym(): Boot-time BIOS-discovered data, relocated back up to Xen
>>> + *                image after discovery.
>>> + * trampsym():    Permanent trampoline code relocated into low memory for AP
>>> + *                startup and wakeup.
>>> + * tramp32sym():  32-bit trampoline code which at boot can be used directly
>>> + *                from the Xen image in memory, but which will need to be
>>> + *                relocated into low (well, into *mapped*) memory in order
>>> + *                to be used for AP startup.
>>> + */
>>>  #undef bootsym
>>>  #define bootsym(s) ((s)-trampoline_start)
>>>  
>>>  #define bootsym_rel(sym, off, opnd...)     \
>>>          bootsym(sym),##opnd;               \
>>>  111:;                                      \
>>> -        .pushsection .trampoline_rel, "a"; \
>>> +        .pushsection .bootsym_rel, "a";    \
>>>          .long 111b - (off) - .;            \
>>>          .popsection
>>>  
>>>  #define bootsym_segrel(sym, off)           \
>>>          $0,$bootsym(sym);                  \
>>>  111:;                                      \
>>> -        .pushsection .trampoline_seg, "a"; \
>>> +        .pushsection .bootsym_seg, "a";    \
>>> +        .long 111b - (off) - .;            \
>>> +        .popsection
>>> +
>>> +#define bootdatasym(s) ((s)-trampoline_start)
>>> +#define bootdatasym_rel(sym, off, opnd...) \
>>> +        bootdatasym(sym),##opnd;           \
>>> +111:;                                      \
>>> +        .pushsection .bootdatasym_rel, "a";\
>>> +        .long 111b - (off) - .;            \
>>> +        .popsection
>>> +
>>> +#undef trampsym
>>> +#define trampsym(s) ((s)-trampoline_start)
>>> +
>>> +#define trampsym_rel(sym, off, opnd...)    \
>>> +        trampsym(sym),##opnd;              \
>>> +111:;                                      \
>>> +        .pushsection .trampsym_rel, "a";   \
>>> +        .long 111b - (off) - .;            \
>>> +        .popsection
>>> +
>>> +#undef tramp32sym
>>> +#define tramp32sym(s) ((s)-trampoline_start)
>>> +
>>> +#define tramp32sym_rel(sym, off, opnd...)  \
>>> +        tramp32sym(sym),##opnd;            \
>>> +111:;                                      \
>>> +        .pushsection .tramp32sym_rel, "a"; \
>>>          .long 111b - (off) - .;            \
>>>          .popsection
>>
>> After your reply to my comment regarding the redundancy here I've
>> checked (in your git branch) how things end up. Am I mistaken, or
>> are the trampsym and tramp32sym #define-s entirely identical
>> (except for the relocations section name)? Even between the others
>> there's little enough difference, so it continues to be unclear to
>> me why you think it's better to have four instances of about the
>> same (not entirely trivial) thing.
> 
> The distinction is that in a no-real-mode boot tramp32 is used in place
> in the Xen image at the physical address it happened to be loaded at,
> and then *again* later in the AP/wakeup path. In the latter case it
> needs to be moved to low memory (or we need to put the physical
> location into idle_pg_table which seemed to be harder, as discussed).
> 
> So tramp32 symbols get relocated *twice*, while the plain tramp symbols
> don't, but actually we could probably ditch the distinction and treat
> them all the same, which would reduce the four categories to three.
> 
> I'll take a look.
> 
> I suppose we could also combine bootsym and bootdatasym, and copy that
> *whole* section back up to the Xen image; both code and data. But I'm
> inclined to prefer keeping them separate and only copying the data back
> up.

My remark here was and is not so much about reducing the number of
instances of separate reloc macros/sections, but about reducing the
redundancy in their definition. At the very least this part

111:;                                      \
        .pushsection .bootdatasym_rel, "a";\
        .long 111b - (off) - .;            \
        .popsection

is identical between all of them, except for the section name, and
hence I'd prefer it to be spelled out just once, and the "actual"
macros then simply using the resulting (helper) macro.
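Spelled out, the factoring being asked for might look something like
this (illustrative only; the helper name `reloc_entry` is invented,
not from the patch):

```asm
/* Hypothetical common helper; only the section name differs per user. */
#define reloc_entry(sect, off)             \
111:;                                      \
        .pushsection sect, "a";            \
        .long 111b - (off) - .;            \
        .popsection

#define bootsym_rel(sym, off, opnd...)     \
        bootsym(sym),##opnd;               \
        reloc_entry(.bootsym_rel, off)

#define trampsym_rel(sym, off, opnd...)    \
        trampsym(sym),##opnd;              \
        reloc_entry(.trampsym_rel, off)
```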

Jan


* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-08-30 16:25       ` David Woodhouse
@ 2019-09-02  7:44         ` Jan Beulich
  2019-09-02 12:51           ` David Woodhouse
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-09-02  7:44 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monné

On 30.08.2019 18:25, David Woodhouse wrote:
> On Fri, 2019-08-30 at 17:43 +0200, Jan Beulich wrote:
>> On 21.08.2019 18:35, David Woodhouse wrote:
>>> @@ -762,6 +773,11 @@ trampoline_setup:
>>>          mov     $((boot_trampoline_end - boot_trampoline_start) / 4),%ecx
>>>          rep movsl %fs:(%esi),%es:(%edi)
>>>  
>>> +        /* Copy boot data template to low memory. */
>>> +        mov     $sym_offs(bootdata_start),%esi
>>> +        mov     $((bootdata_end - bootdata_start) / 4),%ecx
>>> +        rep movsl %fs:(%esi),%es:(%edi)
>>
>> Afaict neither bootdata_start nor bootdata_end are aligned, and so
>> the difference isn't necessarily a multiple of 4. In fact the
>> other (preexisting) movsl looks to have the same issue; I wonder
>> if we propagate bad EDID data for that reason on certain builds /
>> in certain versions.
> 
> Hm, I'm not sure I quite realised the distinction between
> bootdata_start and __bootdata_start (and likewise _end).
> 
> Now that things are placed in the .data.boot16 section by
> .pushsection/.popsection can we rely on the ordering, and that the
> globals in the .S files are actually at the start and end?

Right now I think we can; as you say further down we may not want to
though.

> I thought we *needed* to use the ones in the linker script, and what I
> should probably do here is kill bootdata_start/bootdata_end completely
> and rely only on the ones from the linker script?
> 
> Either that or I should kill the ones in the linker script completely.

Right, just one pair should survive. And seeing how things work before
this series I think it indeed should be linker script symbols only.
And then the ALIGN() ahead of the "start" ones should stay, but there's
no need for one on the "end" ones (again as is currently the case).
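Put together, the surviving linker script fragment would then look
roughly like this (a sketch, assuming the __bootdata_* pair is the one
kept):

```ld
. = ALIGN(4);
__bootdata_start = .;
*(.data.boot16)
__bootdata_end = .;        /* no ALIGN() needed on the "end" symbol */
```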

Jan


* Re: [Xen-devel] [PATCH v3 5/5] x86/boot: Do not use trampoline for no-real-mode boot paths
  2019-08-21 16:35   ` [Xen-devel] [PATCH v3 5/5] x86/boot: Do not use trampoline for no-real-mode boot paths David Woodhouse
@ 2019-09-02  8:54     ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2019-09-02  8:54 UTC (permalink / raw)
  To: David Woodhouse; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper

On 21.08.2019 18:35, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> When booted from EFI or with no-real-mode, there is no need to stomp
> on low memory with the 16-bit boot code. Instead, just go straight to
> trampoline_protmode_entry() at its physical location within the Xen
> image, having applied suitable relocations.
> 
> This means that the GDT has to be loaded with lgdtl because the 16-bit
> lgdt instruction would drop the high 8 bits of the gdt_trampoline
> address, causing failures if the Xen image was loaded above 16MiB.

There's a 2nd case where we assume an address not exceeding 16MiB:
The BOOT_PSEUDORM_{C,D}S entries on trampoline_gdt. While they'll
be unused (afaict) when not going through real mode, them getting
corrupted by applying relocations still seems somewhat risky to
me. I therefore wonder whether we shouldn't re-arrange this GDT's
layout in a prereq patch, such that theses two entries would go
last, and the GDT limit be reduced by two entries when skipping
real mode.

> --- a/xen/arch/x86/boot/head.S
> +++ b/xen/arch/x86/boot/head.S
> @@ -689,16 +689,23 @@ trampoline_setup:
>          lea     __PAGE_HYPERVISOR+sym_esi(l1_identmap),%edi
>          mov     %edi,sym_fs(l2_bootmap)
>  
> -        /* Apply relocations to bootstrap trampoline. */
> -        mov     sym_fs(trampoline_phys),%edx
> -        mov     $sym_offs(__trampoline_rel_start),%edi
> -1:
> -        mov     %fs:(%edi),%eax
> -        add     %edx,%fs:(%edi,%eax)
> -        add     $4,%edi
> -        cmp     $sym_offs(__trampoline_rel_stop),%edi
> -        jb      1b
> +        /* Do not parse command line on EFI platform here. */
> +        cmpb    $0,sym_fs(efi_platform)
> +        jnz     1f
>  
> +        /* Bail if there is no command line to parse. */
> +        mov     sym_fs(multiboot_ptr),%ebx
> +        testl   $MBI_CMDLINE,MB_flags(%ebx)
> +        jz      1f

As a really minor nit - I think it is generally better to have CMP
followed by JE/JNE, while (as you already have it) TEST/AND/OR are
better followed by JZ/JNZ. (Obviously this is a remark applicable to
the series as a whole.)

> @@ -90,6 +92,7 @@ GLOBAL(bootdata_start)
>   * It is entered in Real Mode, with %cs = trampoline_realmode_entry >> 4 and
>   * %ip = 0.
>   */
> +
>  GLOBAL(trampoline_realmode_entry)
>          mov     %cs,%ax
>          mov     %ax,%ds

I'd prefer if you didn't insert a blank line here, as the comment is
specifically associated with this label.

> @@ -97,7 +100,7 @@ GLOBAL(trampoline_realmode_entry)
>          cld
>          cli
>          lidt    trampsym(idt_48)
> -        lgdt    trampsym(gdt_48)
> +        lgdtl   trampsym(gdt_48)
>          mov     $1,%bl                    # EBX != 0 indicates we are an AP
>          xor     %ax, %ax
>          inc     %ax

As per the remark further up, trampoline_gdt's two entries (below
here) having a relocation associated with them should imo at least get
a comment added to state why (other than for the LGDTL here) there's
no issue with a relocation to an address above 16MiB.

> @@ -236,11 +239,23 @@ gdt_48: .word   7*8-1
>  
>  /* The first page of trampoline is permanent, the rest boot-time only. */
>  /* Reuse the boot trampoline on the 1st trampoline page as stack for wakeup. */
> -        .equ    wakeup_stack, boot_trampoline_start + PAGE_SIZE
> +        .equ    wakeup_stack, perm_trampoline_start + PAGE_SIZE

Doesn't this, at the very least, render the comment stale?

>          .global wakeup_stack
>  
> +ENTRY(perm_trampoline_end)
> +
>  /* From here on early boot only. */
>  
> +ENTRY(boot_trampoline_start)
> +
> +        .word   0
> +boot16_idt:
> +        .word   0, 0, 0 # base = limit = 0
> +        .word   0
> +boot16_gdt:
> +        .word   7*8-1
> +        .long   tramp32sym_rel(trampoline_gdt,4)

As there's no change in this patch to how/where the boot trampoline
gets copied, am I understanding right that both end up at
trampoline_phys, just at different points in time? Otherwise I wonder
what the alignment of the relocated boot_trampoline_start ends up
being, and hence whether the initial .word here is actually useful at
all.

> @@ -343,7 +358,8 @@ trampoline_boot_cpu_entry:
>          xor     %ebx,%ebx
>  
>          /* Jump to the common bootstrap entry point. */
> -        jmp     trampoline_protmode_entry
> +        mov     $tramp32sym_rel(trampoline_protmode_entry,4,%eax)
> +        jmp     *%eax

Do you really need to switch to an indirect branch here? I.e. can't
there be a relocation associated with the direct JMP's displacement?

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -679,6 +679,45 @@ static unsigned int __init copy_bios_e820(struct e820entry *map, unsigned int li
>      return n;
>  }
>  
> +extern const s32 __trampoline_rel_start[], __trampoline_rel_stop[];
> +extern const s32 __trampoline32_rel_start[], __trampoline32_rel_stop[];
> +
> +static void __init relocate_trampoline(unsigned long phys)
> +{
> +    const s32 *trampoline_ptr;
> +    uint32_t tramp32_delta;
> +
> +    /* Apply relocations to trampoline. */
> +    for ( trampoline_ptr = __trampoline_rel_start;
> +          trampoline_ptr < __trampoline_rel_stop;
> +          ++trampoline_ptr )
> +        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += phys;
> +
> +    tramp32_delta = phys;
> +    if ( !efi_enabled(EFI_LOADER) )
> +    {
> +        /*
> +         * The non-EFI boot code uses the 32-bit trampoline in place
> +         * so will have relocated it to the physical address of
> +         * perm_trampoline_start already. Undo that as it needs to
> +         * run from low memory for AP startup, because the Xen
> +         * physical address range won't be mapped.
> +         */
> +        tramp32_delta -= trampoline_xen_phys_start;
> +        tramp32_delta -= (unsigned long)(perm_trampoline_start - __XEN_VIRT_START);
> +    }
> +    for ( trampoline_ptr = __trampoline32_rel_start;
> +          trampoline_ptr < __trampoline32_rel_stop;
> +          ++trampoline_ptr )
> +    {
> +        *(u32 *)(*trampoline_ptr + (long)trampoline_ptr) += tramp32_delta;
> +    }

Along the lines of the loop further up, please omit the braces here.

> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -47,7 +47,7 @@
>  #include <asm/tboot.h>
>  #include <mach_apic.h>
>  
> -#define setup_trampoline()    (bootsym_phys(trampoline_realmode_entry))
> +#define setup_trampoline()    (trampsym_phys(trampoline_realmode_entry))

Would you mind taking the opportunity and strip the unnecessary pair
of parentheses here?

> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -697,7 +697,7 @@ void __init zap_low_mappings(void)
>  
>      /* Replace with mapping of the boot trampoline only. */
>      map_pages_to_xen(trampoline_phys, maddr_to_mfn(trampoline_phys),
> -                     PFN_UP(boot_trampoline_end - boot_trampoline_start),
> +                     PFN_UP(perm_trampoline_end - perm_trampoline_start),
>                       __PAGE_HYPERVISOR);

You also want to adjust the comment then.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-09-02  7:44         ` Jan Beulich
@ 2019-09-02 12:51           ` David Woodhouse
  2019-09-02 13:47             ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-09-02 12:51 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monné

On Mon, 2019-09-02 at 09:44 +0200, Jan Beulich wrote:
> Right, just one pair should survive. And seeing how things work before
> this series I think it indeed should be linker script symbols only.
> And then the ALIGN() ahead of the "start" ones should stay, but there's
> no need for one on the "end" ones (again as is currently the case).

If we don't align the end symbol then we need to go back to rounding up
the length with ((boot_trampoline_end - boot_trampoline_start) + 3) / 4
again though, right?


* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-09-02 12:51           ` David Woodhouse
@ 2019-09-02 13:47             ` Jan Beulich
  2019-09-02 13:52               ` David Woodhouse
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-09-02 13:47 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monné

On 02.09.2019 14:51, David Woodhouse wrote:
> On Mon, 2019-09-02 at 09:44 +0200, Jan Beulich wrote:
>> Right, just one pair should survive. And seeing how things work before
>> this series I think it indeed should be linker script symbols only.
>> And then the ALIGN() ahead of the "start" ones should stay, but there's
>> no need for one on the "end" ones (again as is currently the case).
> 
> If we don't align the end symbol then we need to go back to rounding up
> the length with ((boot_trampoline_end - boot_trampoline_start) + 3) / 4
> again though, right?

Wait - we've been talking about the *_rel sections / tables here,
haven't we? All entries of these tables ought to be of equal size,
and hence alignment of a table's "end" label automatically matches
the size of the table entries.

Jan


* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-09-02 13:47             ` Jan Beulich
@ 2019-09-02 13:52               ` David Woodhouse
  2019-09-02 14:10                 ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: David Woodhouse @ 2019-09-02 13:52 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monné

On Mon, 2019-09-02 at 15:47 +0200, Jan Beulich wrote:
> On 02.09.2019 14:51, David Woodhouse wrote:
> > On Mon, 2019-09-02 at 09:44 +0200, Jan Beulich wrote:
> > > Right, just one pair should survive. And seeing how things work before
> > > this series I think it indeed should be linker script symbols only.
> > > And then the ALIGN() ahead of the "start" ones should stay, but there's
> > > no need for one on the "end" ones (again as is currently the case).
> > 
> > If we don't align the end symbol then we need to go back to rounding up
> > the length with ((boot_trampoline_end - boot_trampoline_start) + 3) / 4
> > again though, right?
> 
> Wait - we've been talking about the *_rel sections / tables here,
> haven't we? All entries of these tables ought to be of equal size,
> and hence alignment of a table's "end" label automatically matches
> the size of the table entries.

The specific one we were talking about just then was
bootdata_{start,end}, which is the data itself to be copied up/down,
not the relocations.

The _rel sections indeed need no alignment at the end, as you say.


* Re: [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image
  2019-09-02 13:52               ` David Woodhouse
@ 2019-09-02 14:10                 ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2019-09-02 14:10 UTC (permalink / raw)
  To: David Woodhouse; +Cc: Andrew Cooper, xen-devel, Wei Liu, Roger Pau Monné

On 02.09.2019 15:52, David Woodhouse wrote:
> On Mon, 2019-09-02 at 15:47 +0200, Jan Beulich wrote:
>> On 02.09.2019 14:51, David Woodhouse wrote:
>>> On Mon, 2019-09-02 at 09:44 +0200, Jan Beulich wrote:
>>>> Right, just one pair should survive. And seeing how things work before
>>>> this series I think it indeed should be linker script symbols only.
>>>> And then the ALIGN() ahead of the "start" ones should stay, but there's
>>>> no need for one on the "end" ones (again as is currently the case).
>>>
>>> If we don't align the end symbol then we need to go back to rounding up
>>> the length with ((boot_trampoline_end - boot_trampoline_start) + 3) / 4
>>> again though, right?
>>
>> Wait - we've been talking about the *_rel sections / tables here,
>> haven't we? All entries of these tables ought to be of equal size,
>> and hence alignment of a table's "end" label automatically matches
>> the size of the table entries.
> 
> The specific one we were talking about just then was
> bootdata_{start,end}, which is the data itself to be copied up/down,
> not the relocations.

Oh, I'm sorry then for mixing things up.

> The _rel sections indeed need no alignment at the end, as you say.

Right; and I agree the non-*_rel one wants its "end" label aligned.

Jan



Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
2019-08-21 16:35 [Xen-devel] [PATCH v3 0/5] Clean up x86_64 boot code David Woodhouse
2019-08-21 16:35 ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot David Woodhouse
2019-08-21 16:35   ` [Xen-devel] [PATCH v3 2/5] x86/boot: Split bootsym() into four types of relocations David Woodhouse
2019-08-30 15:10     ` Jan Beulich
2019-08-30 16:12       ` David Woodhouse
2019-09-02  7:37         ` Jan Beulich
2019-08-21 16:35   ` [Xen-devel] [PATCH v3 3/5] x86/boot: Rename trampoline_{start, end} to boot_trampoline_{start, end} David Woodhouse
2019-08-21 16:35   ` [Xen-devel] [PATCH v3 4/5] x86/boot: Copy 16-bit boot variables back up to Xen image David Woodhouse
2019-08-30 15:43     ` Jan Beulich
2019-08-30 16:25       ` David Woodhouse
2019-09-02  7:44         ` Jan Beulich
2019-09-02 12:51           ` David Woodhouse
2019-09-02 13:47             ` Jan Beulich
2019-09-02 13:52               ` David Woodhouse
2019-09-02 14:10                 ` Jan Beulich
2019-08-21 16:35   ` [Xen-devel] [PATCH v3 5/5] x86/boot: Do not use trampoline for no-real-mode boot paths David Woodhouse
2019-09-02  8:54     ` Jan Beulich
2019-08-30 14:25   ` [Xen-devel] [PATCH v3 1/5] x86/boot: Only jump into low trampoline code for real-mode boot Jan Beulich
2019-08-30 14:28     ` David Woodhouse
