* Fix up asmlinkage
@ 2014-04-01 17:32 Andi Kleen
  2014-04-01 17:32 ` [PATCH 1/4] Revert "lto: Make asmlinkage __visible" Andi Kleen
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Andi Kleen @ 2014-04-01 17:32 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, torvalds

As requested by Linus, revert the "Add __visible to asmlinkage"
change and replace it with explicit __visible annotations. This
amounts to roughly 200 changes in a tree sweep. I separated the
patches into arch/x86, arch/x86/crypto and everything else. Right
now this covers only x86; the MIPS and ARM LTO ports will need to
do the same separately.

If you want to pull, the changes are here.

BTW, with these applied we're just three patches away from a
(slowly building, not fully optimized, but working) LTO build.
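
For reference, here is a minimal sketch of what the sweep boils down
to (assuming the usual kernel definition of __visible as GCC's
externally_visible attribute; do_foo() is just a placeholder name):

    /* from the compiler-gcc headers */
    #define __visible __attribute__((externally_visible))

    /* before: visibility was implied by the asmlinkage macro */
    asmlinkage void do_foo(void);

    /* after: each symbol shared with assembler is annotated explicitly */
    asmlinkage __visible void do_foo(void);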


The following changes since commit 01d5f3b598b18a5035426c30801adf65822dbd0c:

  Merge branch 'for-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata (2014-03-31 15:27:37 -0700)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc.git tags/asmlinkage-for-linus

for you to fetch changes up to a61d70fece76c881b98a29507dcb0db1dd21abf3:

  asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/* (2014-04-01 19:20:56 +0200)

----------------------------------------------------------------
Andi Kleen (4):
      Revert "lto: Make asmlinkage __visible"
      asmlinkage, x86: Add explicit __visible to arch/x86/*
      asmlinkage, x86: Add explicit __visible to arch/x86/crypto/*
      asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/*

 arch/x86/boot/compressed/misc.c            |   2 +-
 arch/x86/crypto/aes_glue.c                 |   4 +-
 arch/x86/crypto/aesni-intel_glue.c         |  34 +++++-----
 arch/x86/crypto/blowfish_glue.c            |   8 +--
 arch/x86/crypto/camellia_aesni_avx2_glue.c |  12 ++--
 arch/x86/crypto/camellia_aesni_avx_glue.c  |  12 ++--
 arch/x86/crypto/camellia_glue.c            |   8 +--
 arch/x86/crypto/cast5_avx_glue.c           |   8 +--
 arch/x86/crypto/cast6_avx_glue.c           |  12 ++--
 arch/x86/crypto/crc32c-intel_glue.c        |   2 +-
 arch/x86/crypto/crct10dif-pclmul_glue.c    |   2 +-
 arch/x86/crypto/salsa20_glue.c             |   6 +-
 arch/x86/crypto/serpent_avx2_glue.c        |  12 ++--
 arch/x86/crypto/serpent_avx_glue.c         |  12 ++--
 arch/x86/crypto/sha1_ssse3_glue.c          |   4 +-
 arch/x86/crypto/sha256_ssse3_glue.c        |   6 +-
 arch/x86/crypto/sha512_ssse3_glue.c        |   6 +-
 arch/x86/crypto/twofish_avx_glue.c         |  12 ++--
 arch/x86/crypto/twofish_glue.c             |   4 +-
 arch/x86/include/asm/crypto/camellia.h     |  20 +++---
 arch/x86/include/asm/crypto/serpent-avx.h  |  12 ++--
 arch/x86/include/asm/crypto/serpent-sse2.h |   8 +--
 arch/x86/include/asm/crypto/twofish.h      |   8 +--
 arch/x86/include/asm/hw_irq.h              | 102 ++++++++++++++---------------
 arch/x86/include/asm/kprobes.h             |   2 +-
 arch/x86/include/asm/kvm_host.h            |   2 +-
 arch/x86/include/asm/processor.h           |   2 +-
 arch/x86/include/asm/setup.h               |   6 +-
 arch/x86/include/asm/special_insns.h       |   2 +-
 arch/x86/include/asm/traps.h               |  56 ++++++++--------
 arch/x86/kernel/acpi/sleep.c               |   2 +-
 arch/x86/kernel/apic/io_apic.c             |   2 +-
 arch/x86/kernel/cpu/mcheck/therm_throt.c   |   4 +-
 arch/x86/kernel/cpu/mcheck/threshold.c     |   4 +-
 arch/x86/kernel/head32.c                   |   2 +-
 arch/x86/kernel/head64.c                   |   2 +-
 arch/x86/kernel/process_32.c               |   4 +-
 arch/x86/kernel/process_64.c               |   4 +-
 arch/x86/kernel/smp.c                      |   2 +-
 arch/x86/kernel/traps.c                    |   8 +--
 arch/x86/kernel/vsmp_64.c                  |   6 +-
 arch/x86/kvm/x86.c                         |   2 +-
 arch/x86/lguest/boot.c                     |   4 +-
 arch/x86/math-emu/errors.c                 |  16 ++---
 arch/x86/platform/olpc/olpc-xo1-pm.c       |   2 +-
 arch/x86/power/hibernate_64.c              |   2 +-
 arch/x86/xen/enlighten.c                   |   2 +-
 arch/x86/xen/irq.c                         |   6 +-
 arch/x86/xen/setup.c                       |   2 +-
 drivers/pnp/pnpbios/bioscalls.c            |   2 +-
 include/linux/linkage.h                    |   4 +-
 init/main.c                                |   2 +-
 kernel/context_tracking.c                  |   2 +-
 kernel/locking/lockdep.c                   |   2 +-
 kernel/power/snapshot.c                    |   2 +-
 kernel/printk/printk.c                     |   4 +-
 kernel/sched/core.c                        |  10 +--
 kernel/softirq.c                           |   4 +-
 lib/dump_stack.c                           |   4 +-
 59 files changed, 249 insertions(+), 249 deletions(-)


-Andi



* [PATCH 1/4] Revert "lto: Make asmlinkage __visible"
  2014-04-01 17:32 Fix up asmlinkage Andi Kleen
@ 2014-04-01 17:32 ` Andi Kleen
  2014-04-01 17:32 ` [PATCH 2/4] asmlinkage, x86: Add explicit __visible to arch/x86/* Andi Kleen
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Andi Kleen @ 2014-04-01 17:32 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, torvalds, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

As requested by Linus, revert adding __visible to asmlinkage.
Instead we add __visible explicitly to all the symbols
that need it.

This reverts commit 128ea04a9885af9629059e631ddf0cab4815b589.
---
 include/linux/linkage.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index 34a513a..a6a42dd 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -12,9 +12,9 @@
 #endif
 
 #ifdef __cplusplus
-#define CPP_ASMLINKAGE extern "C" __visible
+#define CPP_ASMLINKAGE extern "C"
 #else
-#define CPP_ASMLINKAGE __visible
+#define CPP_ASMLINKAGE
 #endif
 
 #ifndef asmlinkage
-- 
1.8.5.2



* [PATCH 2/4] asmlinkage, x86: Add explicit __visible to arch/x86/*
  2014-04-01 17:32 Fix up asmlinkage Andi Kleen
  2014-04-01 17:32 ` [PATCH 1/4] Revert "lto: Make asmlinkage __visible" Andi Kleen
@ 2014-04-01 17:32 ` Andi Kleen
  2014-04-01 18:33   ` Linus Torvalds
  2014-04-01 17:32 ` [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/* Andi Kleen
  2014-04-01 17:32 ` [PATCH 4/4] asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/* Andi Kleen
  3 siblings, 1 reply; 9+ messages in thread
From: Andi Kleen @ 2014-04-01 17:32 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, torvalds, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

As requested by Linus, add explicit __visible to the asmlinkage users.
This marks both functions that are visible to the assembler and some
functions that are defined in assembler, making it clear to the
compiler that these symbols are used or defined elsewhere.

Tree sweep for most of arch/x86/*
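
As an illustration of the failure mode this guards against (not taken
from the patch; my_c_handler is a made-up name):

    /* Some entry .S file contains:  call my_c_handler
     * my_c_handler() has no C callers, so under an LTO whole-program
     * build the compiler only sees the definition.  Without the
     * externally_visible attribute it may treat the function as dead
     * or internalize it, breaking the reference from assembly;
     * __visible keeps it emitted under its global name. */
    #define __visible __attribute__((externally_visible))

    __visible void my_c_handler(void)
    {
    }

The extern declarations of assembler-defined symbols in the headers
get the same annotation.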

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/boot/compressed/misc.c          |   2 +-
 arch/x86/include/asm/hw_irq.h            | 102 +++++++++++++++----------------
 arch/x86/include/asm/kprobes.h           |   2 +-
 arch/x86/include/asm/kvm_host.h          |   2 +-
 arch/x86/include/asm/processor.h         |   2 +-
 arch/x86/include/asm/setup.h             |   6 +-
 arch/x86/include/asm/special_insns.h     |   2 +-
 arch/x86/include/asm/traps.h             |  56 ++++++++---------
 arch/x86/kernel/acpi/sleep.c             |   2 +-
 arch/x86/kernel/apic/io_apic.c           |   2 +-
 arch/x86/kernel/cpu/mcheck/therm_throt.c |   4 +-
 arch/x86/kernel/cpu/mcheck/threshold.c   |   4 +-
 arch/x86/kernel/head32.c                 |   2 +-
 arch/x86/kernel/head64.c                 |   2 +-
 arch/x86/kernel/process_32.c             |   4 +-
 arch/x86/kernel/process_64.c             |   4 +-
 arch/x86/kernel/smp.c                    |   2 +-
 arch/x86/kernel/traps.c                  |   8 +--
 arch/x86/kernel/vsmp_64.c                |   6 +-
 arch/x86/kvm/x86.c                       |   2 +-
 arch/x86/lguest/boot.c                   |   4 +-
 arch/x86/math-emu/errors.c               |  16 ++---
 arch/x86/platform/olpc/olpc-xo1-pm.c     |   2 +-
 arch/x86/power/hibernate_64.c            |   2 +-
 arch/x86/xen/enlighten.c                 |   2 +-
 arch/x86/xen/irq.c                       |   6 +-
 arch/x86/xen/setup.c                     |   2 +-
 27 files changed, 125 insertions(+), 125 deletions(-)

diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 196eaf3..8aa6d8b 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -389,7 +389,7 @@ static void parse_elf(void *output)
 	free(phdrs);
 }
 
-asmlinkage void *decompress_kernel(void *rmode, memptr heap,
+asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
 				  unsigned char *input_data,
 				  unsigned long input_len,
 				  unsigned char *output,
diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
index 67d69b8..eb8909c 100644
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -26,56 +26,56 @@
 #include <asm/sections.h>
 
 /* Interrupt handlers registered during init_IRQ */
-extern asmlinkage void apic_timer_interrupt(void);
-extern asmlinkage void x86_platform_ipi(void);
-extern asmlinkage void kvm_posted_intr_ipi(void);
-extern asmlinkage void error_interrupt(void);
-extern asmlinkage void irq_work_interrupt(void);
-
-extern asmlinkage void spurious_interrupt(void);
-extern asmlinkage void thermal_interrupt(void);
-extern asmlinkage void reschedule_interrupt(void);
-
-extern asmlinkage void invalidate_interrupt(void);
-extern asmlinkage void invalidate_interrupt0(void);
-extern asmlinkage void invalidate_interrupt1(void);
-extern asmlinkage void invalidate_interrupt2(void);
-extern asmlinkage void invalidate_interrupt3(void);
-extern asmlinkage void invalidate_interrupt4(void);
-extern asmlinkage void invalidate_interrupt5(void);
-extern asmlinkage void invalidate_interrupt6(void);
-extern asmlinkage void invalidate_interrupt7(void);
-extern asmlinkage void invalidate_interrupt8(void);
-extern asmlinkage void invalidate_interrupt9(void);
-extern asmlinkage void invalidate_interrupt10(void);
-extern asmlinkage void invalidate_interrupt11(void);
-extern asmlinkage void invalidate_interrupt12(void);
-extern asmlinkage void invalidate_interrupt13(void);
-extern asmlinkage void invalidate_interrupt14(void);
-extern asmlinkage void invalidate_interrupt15(void);
-extern asmlinkage void invalidate_interrupt16(void);
-extern asmlinkage void invalidate_interrupt17(void);
-extern asmlinkage void invalidate_interrupt18(void);
-extern asmlinkage void invalidate_interrupt19(void);
-extern asmlinkage void invalidate_interrupt20(void);
-extern asmlinkage void invalidate_interrupt21(void);
-extern asmlinkage void invalidate_interrupt22(void);
-extern asmlinkage void invalidate_interrupt23(void);
-extern asmlinkage void invalidate_interrupt24(void);
-extern asmlinkage void invalidate_interrupt25(void);
-extern asmlinkage void invalidate_interrupt26(void);
-extern asmlinkage void invalidate_interrupt27(void);
-extern asmlinkage void invalidate_interrupt28(void);
-extern asmlinkage void invalidate_interrupt29(void);
-extern asmlinkage void invalidate_interrupt30(void);
-extern asmlinkage void invalidate_interrupt31(void);
-
-extern asmlinkage void irq_move_cleanup_interrupt(void);
-extern asmlinkage void reboot_interrupt(void);
-extern asmlinkage void threshold_interrupt(void);
-
-extern asmlinkage void call_function_interrupt(void);
-extern asmlinkage void call_function_single_interrupt(void);
+extern asmlinkage __visible void apic_timer_interrupt(void);
+extern asmlinkage __visible void x86_platform_ipi(void);
+extern asmlinkage __visible void kvm_posted_intr_ipi(void);
+extern asmlinkage __visible void error_interrupt(void);
+extern asmlinkage __visible void irq_work_interrupt(void);
+
+extern asmlinkage __visible void spurious_interrupt(void);
+extern asmlinkage __visible void thermal_interrupt(void);
+extern asmlinkage __visible void reschedule_interrupt(void);
+
+extern asmlinkage __visible void invalidate_interrupt(void);
+extern asmlinkage __visible void invalidate_interrupt0(void);
+extern asmlinkage __visible void invalidate_interrupt1(void);
+extern asmlinkage __visible void invalidate_interrupt2(void);
+extern asmlinkage __visible void invalidate_interrupt3(void);
+extern asmlinkage __visible void invalidate_interrupt4(void);
+extern asmlinkage __visible void invalidate_interrupt5(void);
+extern asmlinkage __visible void invalidate_interrupt6(void);
+extern asmlinkage __visible void invalidate_interrupt7(void);
+extern asmlinkage __visible void invalidate_interrupt8(void);
+extern asmlinkage __visible void invalidate_interrupt9(void);
+extern asmlinkage __visible void invalidate_interrupt10(void);
+extern asmlinkage __visible void invalidate_interrupt11(void);
+extern asmlinkage __visible void invalidate_interrupt12(void);
+extern asmlinkage __visible void invalidate_interrupt13(void);
+extern asmlinkage __visible void invalidate_interrupt14(void);
+extern asmlinkage __visible void invalidate_interrupt15(void);
+extern asmlinkage __visible void invalidate_interrupt16(void);
+extern asmlinkage __visible void invalidate_interrupt17(void);
+extern asmlinkage __visible void invalidate_interrupt18(void);
+extern asmlinkage __visible void invalidate_interrupt19(void);
+extern asmlinkage __visible void invalidate_interrupt20(void);
+extern asmlinkage __visible void invalidate_interrupt21(void);
+extern asmlinkage __visible void invalidate_interrupt22(void);
+extern asmlinkage __visible void invalidate_interrupt23(void);
+extern asmlinkage __visible void invalidate_interrupt24(void);
+extern asmlinkage __visible void invalidate_interrupt25(void);
+extern asmlinkage __visible void invalidate_interrupt26(void);
+extern asmlinkage __visible void invalidate_interrupt27(void);
+extern asmlinkage __visible void invalidate_interrupt28(void);
+extern asmlinkage __visible void invalidate_interrupt29(void);
+extern asmlinkage __visible void invalidate_interrupt30(void);
+extern asmlinkage __visible void invalidate_interrupt31(void);
+
+extern asmlinkage __visible void irq_move_cleanup_interrupt(void);
+extern asmlinkage __visible void reboot_interrupt(void);
+extern asmlinkage __visible void threshold_interrupt(void);
+
+extern asmlinkage __visible void call_function_interrupt(void);
+extern asmlinkage __visible void call_function_single_interrupt(void);
 
 #ifdef CONFIG_TRACING
 /* Interrupt handlers registered during init_IRQ */
@@ -177,7 +177,7 @@ extern __visible void smp_spurious_interrupt(struct pt_regs *);
 extern __visible void smp_x86_platform_ipi(struct pt_regs *);
 extern __visible void smp_error_interrupt(struct pt_regs *);
 #ifdef CONFIG_X86_IO_APIC
-extern asmlinkage void smp_irq_move_cleanup_interrupt(void);
+extern asmlinkage __visible void smp_irq_move_cleanup_interrupt(void);
 #endif
 #ifdef CONFIG_SMP
 extern __visible void smp_reschedule_interrupt(struct pt_regs *);
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index 9454c16..cd1d62c4 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -62,7 +62,7 @@ extern __visible kprobe_opcode_t optprobe_template_end;
 extern const int kretprobe_blacklist_size;
 
 void arch_remove_kprobe(struct kprobe *p);
-asmlinkage void kretprobe_trampoline(void);
+asmlinkage __visible void kretprobe_trampoline(void);
 
 /* Architecture specific copy of original instruction*/
 struct arch_specific_insn {
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fdf83af..ebddc85 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1009,7 +1009,7 @@ enum {
  * reboot turns off virtualization while processes are running.
  * Trap the fault and ignore the instruction if that happens.
  */
-asmlinkage void kvm_spurious_fault(void);
+asmlinkage __visible void kvm_spurious_fault(void);
 
 #define ____kvm_handle_fault_on_reboot(insn, cleanup_insn)	\
 	"666: " insn "\n\t" \
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index fdedd38..b4196bf 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -434,7 +434,7 @@ DECLARE_INIT_PER_CPU(irq_stack_union);
 
 DECLARE_PER_CPU(char *, irq_stack_ptr);
 DECLARE_PER_CPU(unsigned int, irq_count);
-extern asmlinkage void ignore_sysret(void);
+extern asmlinkage __visible void ignore_sysret(void);
 #else	/* X86_64 */
 #ifdef CONFIG_CC_STACKPROTECTOR
 /*
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index d62c9f8..39b5ddc 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -111,11 +111,11 @@ void *extend_brk(size_t size, size_t align);
 extern void probe_roms(void);
 #ifdef __i386__
 
-asmlinkage void __init i386_start_kernel(void);
+asmlinkage __visible void __init i386_start_kernel(void);
 
 #else
-asmlinkage void __init x86_64_start_kernel(char *real_mode);
-asmlinkage void __init x86_64_start_reservations(char *real_mode_data);
+asmlinkage __visible void __init x86_64_start_kernel(char *real_mode);
+asmlinkage __visible void __init x86_64_start_reservations(char *real_mode_data);
 
 #endif /* __i386__ */
 #endif /* _SETUP */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index e820c08..8c298b4 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -101,7 +101,7 @@ static inline void native_wbinvd(void)
 	asm volatile("wbinvd": : :"memory");
 }
 
-extern asmlinkage void native_load_gs_index(unsigned);
+extern asmlinkage __visible void native_load_gs_index(unsigned);
 
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
index 58d66fe..3e54856 100644
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -8,37 +8,37 @@
 
 #define dotraplinkage __visible
 
-asmlinkage void divide_error(void);
-asmlinkage void debug(void);
-asmlinkage void nmi(void);
-asmlinkage void int3(void);
-asmlinkage void xen_debug(void);
-asmlinkage void xen_int3(void);
-asmlinkage void xen_stack_segment(void);
-asmlinkage void overflow(void);
-asmlinkage void bounds(void);
-asmlinkage void invalid_op(void);
-asmlinkage void device_not_available(void);
+asmlinkage __visible void divide_error(void);
+asmlinkage __visible void debug(void);
+asmlinkage __visible void nmi(void);
+asmlinkage __visible void int3(void);
+asmlinkage __visible void xen_debug(void);
+asmlinkage __visible void xen_int3(void);
+asmlinkage __visible void xen_stack_segment(void);
+asmlinkage __visible void overflow(void);
+asmlinkage __visible void bounds(void);
+asmlinkage __visible void invalid_op(void);
+asmlinkage __visible void device_not_available(void);
 #ifdef CONFIG_X86_64
-asmlinkage void double_fault(void);
+asmlinkage __visible void double_fault(void);
 #endif
-asmlinkage void coprocessor_segment_overrun(void);
-asmlinkage void invalid_TSS(void);
-asmlinkage void segment_not_present(void);
-asmlinkage void stack_segment(void);
-asmlinkage void general_protection(void);
-asmlinkage void page_fault(void);
-asmlinkage void async_page_fault(void);
-asmlinkage void spurious_interrupt_bug(void);
-asmlinkage void coprocessor_error(void);
-asmlinkage void alignment_check(void);
+asmlinkage __visible void coprocessor_segment_overrun(void);
+asmlinkage __visible void invalid_TSS(void);
+asmlinkage __visible void segment_not_present(void);
+asmlinkage __visible void stack_segment(void);
+asmlinkage __visible void general_protection(void);
+asmlinkage __visible void page_fault(void);
+asmlinkage __visible void async_page_fault(void);
+asmlinkage __visible void spurious_interrupt_bug(void);
+asmlinkage __visible void coprocessor_error(void);
+asmlinkage __visible void alignment_check(void);
 #ifdef CONFIG_X86_MCE
-asmlinkage void machine_check(void);
+asmlinkage __visible void machine_check(void);
 #endif /* CONFIG_X86_MCE */
-asmlinkage void simd_coprocessor_error(void);
+asmlinkage __visible void simd_coprocessor_error(void);
 
 #ifdef CONFIG_TRACING
-asmlinkage void trace_page_fault(void);
+asmlinkage __visible void trace_page_fault(void);
 #define trace_divide_error divide_error
 #define trace_bounds bounds
 #define trace_invalid_op invalid_op
@@ -68,7 +68,7 @@ dotraplinkage void do_segment_not_present(struct pt_regs *, long);
 dotraplinkage void do_stack_segment(struct pt_regs *, long);
 #ifdef CONFIG_X86_64
 dotraplinkage void do_double_fault(struct pt_regs *, long);
-asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *);
+asmlinkage __visible __kprobes struct pt_regs *sync_regs(struct pt_regs *);
 #endif
 dotraplinkage void do_general_protection(struct pt_regs *, long);
 dotraplinkage void do_page_fault(struct pt_regs *, unsigned long);
@@ -101,8 +101,8 @@ extern int panic_on_unrecovered_nmi;
 void math_error(struct pt_regs *, int, int);
 void math_emulate(struct math_emu_info *);
 #ifndef CONFIG_X86_32
-asmlinkage void smp_thermal_interrupt(void);
-asmlinkage void mce_threshold_interrupt(void);
+asmlinkage __visible void smp_thermal_interrupt(void);
+asmlinkage __visible void mce_threshold_interrupt(void);
 #endif
 
 /* Interrupts/Exceptions */
diff --git a/arch/x86/kernel/acpi/sleep.c b/arch/x86/kernel/acpi/sleep.c
index 3a2ae4c..3136820 100644
--- a/arch/x86/kernel/acpi/sleep.c
+++ b/arch/x86/kernel/acpi/sleep.c
@@ -31,7 +31,7 @@ static char temp_stack[4096];
  *
  * Wrapper around acpi_enter_sleep_state() to be called by assmebly.
  */
-acpi_status asmlinkage x86_acpi_enter_sleep_state(u8 state)
+acpi_status asmlinkage __visible x86_acpi_enter_sleep_state(u8 state)
 {
 	return acpi_enter_sleep_state(state);
 }
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 6ad4658..d61b23e 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2189,7 +2189,7 @@ void send_cleanup_vector(struct irq_cfg *cfg)
 	cfg->move_in_progress = 0;
 }
 
-asmlinkage void smp_irq_move_cleanup_interrupt(void)
+asmlinkage __visible void smp_irq_move_cleanup_interrupt(void)
 {
 	unsigned vector, me;
 
diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
index 3eec7de..1a2fb30 100644
--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
+++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
@@ -439,14 +439,14 @@ static inline void __smp_thermal_interrupt(void)
 	smp_thermal_vector();
 }
 
-asmlinkage void smp_thermal_interrupt(struct pt_regs *regs)
+asmlinkage __visible void smp_thermal_interrupt(struct pt_regs *regs)
 {
 	entering_irq();
 	__smp_thermal_interrupt();
 	exiting_ack_irq();
 }
 
-asmlinkage void smp_trace_thermal_interrupt(struct pt_regs *regs)
+asmlinkage __visible void smp_trace_thermal_interrupt(struct pt_regs *regs)
 {
 	entering_irq();
 	trace_thermal_apic_entry(THERMAL_APIC_VECTOR);
diff --git a/arch/x86/kernel/cpu/mcheck/threshold.c b/arch/x86/kernel/cpu/mcheck/threshold.c
index fe6b1c8..7245980 100644
--- a/arch/x86/kernel/cpu/mcheck/threshold.c
+++ b/arch/x86/kernel/cpu/mcheck/threshold.c
@@ -24,14 +24,14 @@ static inline void __smp_threshold_interrupt(void)
 	mce_threshold_vector();
 }
 
-asmlinkage void smp_threshold_interrupt(void)
+asmlinkage __visible void smp_threshold_interrupt(void)
 {
 	entering_irq();
 	__smp_threshold_interrupt();
 	exiting_ack_irq();
 }
 
-asmlinkage void smp_trace_threshold_interrupt(void)
+asmlinkage __visible void smp_trace_threshold_interrupt(void)
 {
 	entering_irq();
 	trace_threshold_apic_entry(THRESHOLD_APIC_VECTOR);
diff --git a/arch/x86/kernel/head32.c b/arch/x86/kernel/head32.c
index c61a14a..d6c1b983 100644
--- a/arch/x86/kernel/head32.c
+++ b/arch/x86/kernel/head32.c
@@ -29,7 +29,7 @@ static void __init i386_default_early_setup(void)
 	reserve_ebda_region();
 }
 
-asmlinkage void __init i386_start_kernel(void)
+asmlinkage __visible void __init i386_start_kernel(void)
 {
 	sanitize_boot_params(&boot_params);
 
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 85126cc..068054f 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -137,7 +137,7 @@ static void __init copy_bootdata(char *real_mode_data)
 	}
 }
 
-asmlinkage void __init x86_64_start_kernel(char * real_mode_data)
+asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 {
 	int i;
 
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 0de43e9..07d35b2 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -55,8 +55,8 @@
 #include <asm/debugreg.h>
 #include <asm/switch_to.h>
 
-asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
-asmlinkage void ret_from_kernel_thread(void) __asm__("ret_from_kernel_thread");
+asmlinkage __visible void ret_from_fork(void) __asm__("ret_from_fork");
+asmlinkage __visible void ret_from_kernel_thread(void) __asm__("ret_from_kernel_thread");
 
 /*
  * Return saved PC of a blocked thread.
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 9c0280f..eae1f10 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -50,9 +50,9 @@
 #include <asm/debugreg.h>
 #include <asm/switch_to.h>
 
-asmlinkage extern void ret_from_fork(void);
+asmlinkage __visible extern void ret_from_fork(void);
 
-asmlinkage DEFINE_PER_CPU(unsigned long, old_rsp);
+__visible DEFINE_PER_CPU(unsigned long, old_rsp);
 
 /* Prints also some state that isn't saved in the pt_regs */
 void __show_regs(struct pt_regs *regs, int all)
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 7c3a5a6..be8e1bd 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -168,7 +168,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
  * this function calls the 'stop' function on all other CPUs in the system.
  */
 
-asmlinkage void smp_reboot_interrupt(void)
+asmlinkage __visible void smp_reboot_interrupt(void)
 {
 	ack_APIC_irq();
 	irq_enter();
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 57409f6..0fd7f57 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -71,7 +71,7 @@ gate_desc debug_idt_table[NR_VECTORS] __page_aligned_bss;
 #include <asm/processor-flags.h>
 #include <asm/setup.h>
 
-asmlinkage int system_call(void);
+asmlinkage __visible int system_call(void);
 #endif
 
 /* Must be page-aligned because the real IDT is used in a fixmap. */
@@ -357,7 +357,7 @@ exit:
  * for scheduling or signal handling. The actual stack switch is done in
  * entry.S
  */
-asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
+asmlinkage __visible __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
 {
 	struct pt_regs *regs = eregs;
 	/* Did already sync */
@@ -601,11 +601,11 @@ do_spurious_interrupt_bug(struct pt_regs *regs, long error_code)
 #endif
 }
 
-asmlinkage void __attribute__((weak)) smp_thermal_interrupt(void)
+asmlinkage __visible void __attribute__((weak)) smp_thermal_interrupt(void)
 {
 }
 
-asmlinkage void __attribute__((weak)) smp_threshold_interrupt(void)
+asmlinkage __visible void __attribute__((weak)) smp_threshold_interrupt(void)
 {
 }
 
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
index f6584a9..aeda81e 100644
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -33,7 +33,7 @@
  * and vice versa.
  */
 
-asmlinkage unsigned long vsmp_save_fl(void)
+asmlinkage __visible unsigned long vsmp_save_fl(void)
 {
 	unsigned long flags = native_save_fl();
 
@@ -53,7 +53,7 @@ __visible void vsmp_restore_fl(unsigned long flags)
 }
 PV_CALLEE_SAVE_REGS_THUNK(vsmp_restore_fl);
 
-asmlinkage void vsmp_irq_disable(void)
+asmlinkage __visible void vsmp_irq_disable(void)
 {
 	unsigned long flags = native_save_fl();
 
@@ -61,7 +61,7 @@ asmlinkage void vsmp_irq_disable(void)
 }
 PV_CALLEE_SAVE_REGS_THUNK(vsmp_irq_disable);
 
-asmlinkage void vsmp_irq_enable(void)
+asmlinkage __visible void vsmp_irq_enable(void)
 {
 	unsigned long flags = native_save_fl();
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2b85784..14fff15 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -280,7 +280,7 @@ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_set_apic_base);
 
-asmlinkage void kvm_spurious_fault(void)
+asmlinkage __visible void kvm_spurious_fault(void)
 {
 	/* Fault while not rebooting.  We want the trace. */
 	BUG();
diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
index ad1fb5f..aae9413 100644
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -233,13 +233,13 @@ static void lguest_end_context_switch(struct task_struct *next)
  * flags word contains all kind of stuff, but in practice Linux only cares
  * about the interrupt flag.  Our "save_flags()" just returns that.
  */
-asmlinkage unsigned long lguest_save_fl(void)
+asmlinkage __visible unsigned long lguest_save_fl(void)
 {
 	return lguest_data.irq_enabled;
 }
 
 /* Interrupts go off... */
-asmlinkage void lguest_irq_disable(void)
+asmlinkage __visible void lguest_irq_disable(void)
 {
 	lguest_data.irq_enabled = 0;
 }
diff --git a/arch/x86/math-emu/errors.c b/arch/x86/math-emu/errors.c
index a544908..9e6545f 100644
--- a/arch/x86/math-emu/errors.c
+++ b/arch/x86/math-emu/errors.c
@@ -302,7 +302,7 @@ static struct {
 	      0x242  in div_Xsig.S
  */
 
-asmlinkage void FPU_exception(int n)
+asmlinkage __visible void FPU_exception(int n)
 {
 	int i, int_type;
 
@@ -492,7 +492,7 @@ int real_2op_NaN(FPU_REG const *b, u_char tagb,
 
 /* Invalid arith operation on Valid registers */
 /* Returns < 0 if the exception is unmasked */
-asmlinkage int arith_invalid(int deststnr)
+asmlinkage __visible int arith_invalid(int deststnr)
 {
 
 	EXCEPTION(EX_Invalid);
@@ -507,7 +507,7 @@ asmlinkage int arith_invalid(int deststnr)
 }
 
 /* Divide a finite number by zero */
-asmlinkage int FPU_divide_by_zero(int deststnr, u_char sign)
+asmlinkage __visible int FPU_divide_by_zero(int deststnr, u_char sign)
 {
 	FPU_REG *dest = &st(deststnr);
 	int tag = TAG_Valid;
@@ -539,7 +539,7 @@ int set_precision_flag(int flags)
 }
 
 /* This may be called often, so keep it lean */
-asmlinkage void set_precision_flag_up(void)
+asmlinkage __visible void set_precision_flag_up(void)
 {
 	if (control_word & CW_Precision)
 		partial_status |= (SW_Precision | SW_C1);	/* The masked response */
@@ -548,7 +548,7 @@ asmlinkage void set_precision_flag_up(void)
 }
 
 /* This may be called often, so keep it lean */
-asmlinkage void set_precision_flag_down(void)
+asmlinkage __visible void set_precision_flag_down(void)
 {
 	if (control_word & CW_Precision) {	/* The masked response */
 		partial_status &= ~SW_C1;
@@ -557,7 +557,7 @@ asmlinkage void set_precision_flag_down(void)
 		EXCEPTION(EX_Precision);
 }
 
-asmlinkage int denormal_operand(void)
+asmlinkage __visible int denormal_operand(void)
 {
 	if (control_word & CW_Denormal) {	/* The masked response */
 		partial_status |= SW_Denorm_Op;
@@ -568,7 +568,7 @@ asmlinkage int denormal_operand(void)
 	}
 }
 
-asmlinkage int arith_overflow(FPU_REG *dest)
+asmlinkage __visible int arith_overflow(FPU_REG *dest)
 {
 	int tag = TAG_Valid;
 
@@ -596,7 +596,7 @@ asmlinkage int arith_overflow(FPU_REG *dest)
 
 }
 
-asmlinkage int arith_underflow(FPU_REG *dest)
+asmlinkage __visible int arith_underflow(FPU_REG *dest)
 {
 	int tag = TAG_Valid;
 
diff --git a/arch/x86/platform/olpc/olpc-xo1-pm.c b/arch/x86/platform/olpc/olpc-xo1-pm.c
index ff0174d..a9acde7 100644
--- a/arch/x86/platform/olpc/olpc-xo1-pm.c
+++ b/arch/x86/platform/olpc/olpc-xo1-pm.c
@@ -75,7 +75,7 @@ static int xo1_power_state_enter(suspend_state_t pm_state)
 	return 0;
 }
 
-asmlinkage int xo1_do_sleep(u8 sleep_state)
+asmlinkage __visible int xo1_do_sleep(u8 sleep_state)
 {
 	void *pgd_addr = __va(read_cr3());
 
diff --git a/arch/x86/power/hibernate_64.c b/arch/x86/power/hibernate_64.c
index 304fca2..35e2bb6 100644
--- a/arch/x86/power/hibernate_64.c
+++ b/arch/x86/power/hibernate_64.c
@@ -23,7 +23,7 @@
 extern __visible const void __nosave_begin, __nosave_end;
 
 /* Defined in hibernate_asm_64.S */
-extern asmlinkage int restore_image(void);
+extern asmlinkage __visible int restore_image(void);
 
 /*
  * Address to jump to in the last phase of restore in order to get to the image
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 201d09a..c34bfc4 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1515,7 +1515,7 @@ static void __init xen_pvh_early_guest_init(void)
 }
 
 /* First C function to be called on Xen boot */
-asmlinkage void __init xen_start_kernel(void)
+asmlinkage __visible void __init xen_start_kernel(void)
 {
 	struct physdev_set_iopl set_iopl;
 	int rc;
diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 08f763d..a1207cb 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -23,7 +23,7 @@ void xen_force_evtchn_callback(void)
 	(void)HYPERVISOR_xen_version(0, NULL);
 }
 
-asmlinkage unsigned long xen_save_fl(void)
+asmlinkage __visible unsigned long xen_save_fl(void)
 {
 	struct vcpu_info *vcpu;
 	unsigned long flags;
@@ -63,7 +63,7 @@ __visible void xen_restore_fl(unsigned long flags)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_restore_fl);
 
-asmlinkage void xen_irq_disable(void)
+asmlinkage __visible void xen_irq_disable(void)
 {
 	/* There's a one instruction preempt window here.  We need to
 	   make sure we're don't switch CPUs between getting the vcpu
@@ -74,7 +74,7 @@ asmlinkage void xen_irq_disable(void)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_irq_disable);
 
-asmlinkage void xen_irq_enable(void)
+asmlinkage __visible void xen_irq_enable(void)
 {
 	struct vcpu_info *vcpu;
 
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0982233..5c95244 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -35,7 +35,7 @@
 extern const char xen_hypervisor_callback[];
 extern const char xen_failsafe_callback[];
 #ifdef CONFIG_X86_64
-extern asmlinkage void nmi(void);
+extern asmlinkage __visible void nmi(void);
 #endif
 extern void xen_sysenter_target(void);
 extern void xen_syscall_target(void);
-- 
1.8.5.2



* [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/*
  2014-04-01 17:32 Fix up asmlinkage Andi Kleen
  2014-04-01 17:32 ` [PATCH 1/4] Revert "lto: Make asmlinkage __visible" Andi Kleen
  2014-04-01 17:32 ` [PATCH 2/4] asmlinkage, x86: Add explicit __visible to arch/x86/* Andi Kleen
@ 2014-04-01 17:32 ` Andi Kleen
  2014-04-01 18:53   ` Linus Torvalds
  2014-04-01 17:32 ` [PATCH 4/4] asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/* Andi Kleen
  3 siblings, 1 reply; 9+ messages in thread
From: Andi Kleen @ 2014-04-01 17:32 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, torvalds, Andi Kleen, Herbert Xu

From: Andi Kleen <ak@linux.intel.com>

As requested by Linus, add explicit __visible to the asmlinkage users.
This marks both functions that are visible to the assembler and some
functions that are defined in assembler, making it clear to the
compiler that these symbols are used or defined elsewhere.

Tree sweep for arch/x86/crypto/*

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/crypto/aes_glue.c                 |  4 ++--
 arch/x86/crypto/aesni-intel_glue.c         | 34 +++++++++++++++---------------
 arch/x86/crypto/blowfish_glue.c            |  8 +++----
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 12 +++++------
 arch/x86/crypto/camellia_aesni_avx_glue.c  | 12 +++++------
 arch/x86/crypto/camellia_glue.c            |  8 +++----
 arch/x86/crypto/cast5_avx_glue.c           |  8 +++----
 arch/x86/crypto/cast6_avx_glue.c           | 12 +++++------
 arch/x86/crypto/crc32c-intel_glue.c        |  2 +-
 arch/x86/crypto/crct10dif-pclmul_glue.c    |  2 +-
 arch/x86/crypto/salsa20_glue.c             |  6 +++---
 arch/x86/crypto/serpent_avx2_glue.c        | 12 +++++------
 arch/x86/crypto/serpent_avx_glue.c         | 12 +++++------
 arch/x86/crypto/sha1_ssse3_glue.c          |  4 ++--
 arch/x86/crypto/sha256_ssse3_glue.c        |  6 +++---
 arch/x86/crypto/sha512_ssse3_glue.c        |  6 +++---
 arch/x86/crypto/twofish_avx_glue.c         | 12 +++++------
 arch/x86/crypto/twofish_glue.c             |  4 ++--
 arch/x86/include/asm/crypto/camellia.h     | 20 +++++++++---------
 arch/x86/include/asm/crypto/serpent-avx.h  | 12 +++++------
 arch/x86/include/asm/crypto/serpent-sse2.h |  8 +++----
 arch/x86/include/asm/crypto/twofish.h      |  8 +++----
 22 files changed, 106 insertions(+), 106 deletions(-)

diff --git a/arch/x86/crypto/aes_glue.c b/arch/x86/crypto/aes_glue.c
index aafe8ce..78712c1 100644
--- a/arch/x86/crypto/aes_glue.c
+++ b/arch/x86/crypto/aes_glue.c
@@ -7,8 +7,8 @@
 #include <crypto/aes.h>
 #include <asm/crypto/aes.h>
 
-asmlinkage void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
-asmlinkage void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
+asmlinkage __visible void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
+asmlinkage __visible void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
 
 void crypto_aes_encrypt_x86(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src)
 {
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 948ad0e..4e54d1c 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -83,19 +83,19 @@ struct aesni_xts_ctx {
 	u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx) + AESNI_ALIGN - 1];
 };
 
-asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
+asmlinkage __visible int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
 			     unsigned int key_len);
-asmlinkage void aesni_enc(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			  const u8 *in);
-asmlinkage void aesni_dec(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_dec(struct crypto_aes_ctx *ctx, u8 *out,
 			  const u8 *in);
-asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len);
-asmlinkage void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len);
-asmlinkage void aesni_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len, u8 *iv);
-asmlinkage void aesni_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len, u8 *iv);
 
 int crypto_fpu_init(void);
@@ -105,10 +105,10 @@ void crypto_fpu_exit(void);
 #define AVX_GEN4_OPTSIZE 4096
 
 #ifdef CONFIG_X86_64
-asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len, u8 *iv);
 
-asmlinkage void aesni_xts_crypt8(struct crypto_aes_ctx *ctx, u8 *out,
+asmlinkage __visible void aesni_xts_crypt8(struct crypto_aes_ctx *ctx, u8 *out,
 				 const u8 *in, bool enc, u8 *iv);
 
 /* asmlinkage void aesni_gcm_enc()
@@ -127,7 +127,7 @@ asmlinkage void aesni_xts_crypt8(struct crypto_aes_ctx *ctx, u8 *out,
  * unsigned long auth_tag_len), Authenticated Tag Length in bytes.
  *          Valid values are 16 (most likely), 12 or 8.
  */
-asmlinkage void aesni_gcm_enc(void *ctx, u8 *out,
+asmlinkage __visible void aesni_gcm_enc(void *ctx, u8 *out,
 			const u8 *in, unsigned long plaintext_len, u8 *iv,
 			u8 *hash_subkey, const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
@@ -148,7 +148,7 @@ asmlinkage void aesni_gcm_enc(void *ctx, u8 *out,
  * unsigned long auth_tag_len) Authenticated Tag Length in bytes.
  * Valid values are 16 (most likely), 12 or 8.
  */
-asmlinkage void aesni_gcm_dec(void *ctx, u8 *out,
+asmlinkage __visible void aesni_gcm_dec(void *ctx, u8 *out,
 			const u8 *in, unsigned long ciphertext_len, u8 *iv,
 			u8 *hash_subkey, const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
@@ -160,14 +160,14 @@ asmlinkage void aesni_gcm_dec(void *ctx, u8 *out,
  * gcm_data *my_ctx_data, context data
  * u8 *hash_subkey,  the Hash sub key input. Data starts on a 16-byte boundary.
  */
-asmlinkage void aesni_gcm_precomp_avx_gen2(void *my_ctx_data, u8 *hash_subkey);
+asmlinkage __visible void aesni_gcm_precomp_avx_gen2(void *my_ctx_data, u8 *hash_subkey);
 
-asmlinkage void aesni_gcm_enc_avx_gen2(void *ctx, u8 *out,
+asmlinkage __visible void aesni_gcm_enc_avx_gen2(void *ctx, u8 *out,
 			const u8 *in, unsigned long plaintext_len, u8 *iv,
 			const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
 
-asmlinkage void aesni_gcm_dec_avx_gen2(void *ctx, u8 *out,
+asmlinkage __visible void aesni_gcm_dec_avx_gen2(void *ctx, u8 *out,
 			const u8 *in, unsigned long ciphertext_len, u8 *iv,
 			const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
@@ -209,14 +209,14 @@ static void aesni_gcm_dec_avx(void *ctx, u8 *out,
  * gcm_data *my_ctx_data, context data
  * u8 *hash_subkey,  the Hash sub key input. Data starts on a 16-byte boundary.
  */
-asmlinkage void aesni_gcm_precomp_avx_gen4(void *my_ctx_data, u8 *hash_subkey);
+asmlinkage __visible void aesni_gcm_precomp_avx_gen4(void *my_ctx_data, u8 *hash_subkey);
 
-asmlinkage void aesni_gcm_enc_avx_gen4(void *ctx, u8 *out,
+asmlinkage __visible void aesni_gcm_enc_avx_gen4(void *ctx, u8 *out,
 			const u8 *in, unsigned long plaintext_len, u8 *iv,
 			const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
 
-asmlinkage void aesni_gcm_dec_avx_gen4(void *ctx, u8 *out,
+asmlinkage __visible void aesni_gcm_dec_avx_gen4(void *ctx, u8 *out,
 			const u8 *in, unsigned long ciphertext_len, u8 *iv,
 			const u8 *aad, unsigned long aad_len,
 			u8 *auth_tag, unsigned long auth_tag_len);
diff --git a/arch/x86/crypto/blowfish_glue.c b/arch/x86/crypto/blowfish_glue.c
index 50ec333..1611e86 100644
--- a/arch/x86/crypto/blowfish_glue.c
+++ b/arch/x86/crypto/blowfish_glue.c
@@ -34,14 +34,14 @@
 #include <crypto/algapi.h>
 
 /* regular block cipher functions */
-asmlinkage void __blowfish_enc_blk(struct bf_ctx *ctx, u8 *dst, const u8 *src,
+asmlinkage __visible void __blowfish_enc_blk(struct bf_ctx *ctx, u8 *dst, const u8 *src,
 				   bool xor);
-asmlinkage void blowfish_dec_blk(struct bf_ctx *ctx, u8 *dst, const u8 *src);
+asmlinkage __visible void blowfish_dec_blk(struct bf_ctx *ctx, u8 *dst, const u8 *src);
 
 /* 4-way parallel cipher functions */
-asmlinkage void __blowfish_enc_blk_4way(struct bf_ctx *ctx, u8 *dst,
+asmlinkage __visible void __blowfish_enc_blk_4way(struct bf_ctx *ctx, u8 *dst,
 					const u8 *src, bool xor);
-asmlinkage void blowfish_dec_blk_4way(struct bf_ctx *ctx, u8 *dst,
+asmlinkage __visible void blowfish_dec_blk_4way(struct bf_ctx *ctx, u8 *dst,
 				      const u8 *src);
 
 static inline void blowfish_enc_blk(struct bf_ctx *ctx, u8 *dst, const u8 *src)
diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c
index 4209a76..403e575 100644
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c
@@ -28,19 +28,19 @@
 #define CAMELLIA_AESNI_AVX2_PARALLEL_BLOCKS 32
 
 /* 32-way AVX2/AES-NI parallel cipher functions */
-asmlinkage void camellia_ecb_enc_32way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ecb_enc_32way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
-asmlinkage void camellia_ecb_dec_32way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ecb_dec_32way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
 
-asmlinkage void camellia_cbc_dec_32way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_cbc_dec_32way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
-asmlinkage void camellia_ctr_32way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ctr_32way(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, le128 *iv);
 
-asmlinkage void camellia_xts_enc_32way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_xts_enc_32way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src, le128 *iv);
-asmlinkage void camellia_xts_dec_32way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_xts_dec_32way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src, le128 *iv);
 
 static const struct common_glue_ctx camellia_enc = {
diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
index 87a041a..b206471 100644
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
@@ -27,27 +27,27 @@
 #define CAMELLIA_AESNI_PARALLEL_BLOCKS 16
 
 /* 16-way parallel cipher functions (avx/aes-ni) */
-asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_ecb_enc_16way);
 
-asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_ecb_dec_16way);
 
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_cbc_dec_16way);
 
-asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_ctr_16way);
 
-asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_xts_enc_16way);
 
-asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(camellia_xts_dec_16way);
 
diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c
index c171dcb..5aaad82 100644
--- a/arch/x86/crypto/camellia_glue.c
+++ b/arch/x86/crypto/camellia_glue.c
@@ -36,18 +36,18 @@
 #include <asm/crypto/glue_helper.h>
 
 /* regular block cipher functions */
-asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, bool xor);
 EXPORT_SYMBOL_GPL(__camellia_enc_blk);
-asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
 				 const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_dec_blk);
 
 /* 2-way parallel cipher functions */
-asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
 					const u8 *src, bool xor);
 EXPORT_SYMBOL_GPL(__camellia_enc_blk_2way);
-asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
 				      const u8 *src);
 EXPORT_SYMBOL_GPL(camellia_dec_blk_2way);
 
diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c
index e6a3700..77bbfe0 100644
--- a/arch/x86/crypto/cast5_avx_glue.c
+++ b/arch/x86/crypto/cast5_avx_glue.c
@@ -37,13 +37,13 @@
 
 #define CAST5_PARALLEL_BLOCKS 16
 
-asmlinkage void cast5_ecb_enc_16way(struct cast5_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast5_ecb_enc_16way(struct cast5_ctx *ctx, u8 *dst,
 				    const u8 *src);
-asmlinkage void cast5_ecb_dec_16way(struct cast5_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast5_ecb_dec_16way(struct cast5_ctx *ctx, u8 *dst,
 				    const u8 *src);
-asmlinkage void cast5_cbc_dec_16way(struct cast5_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast5_cbc_dec_16way(struct cast5_ctx *ctx, u8 *dst,
 				    const u8 *src);
-asmlinkage void cast5_ctr_16way(struct cast5_ctx *ctx, u8 *dst, const u8 *src,
+asmlinkage __visible void cast5_ctr_16way(struct cast5_ctx *ctx, u8 *dst, const u8 *src,
 				__be64 *iv);
 
 static inline bool cast5_fpu_begin(bool fpu_enabled, unsigned int nbytes)
diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c
index 09f3677..9568783 100644
--- a/arch/x86/crypto/cast6_avx_glue.c
+++ b/arch/x86/crypto/cast6_avx_glue.c
@@ -42,19 +42,19 @@
 
 #define CAST6_PARALLEL_BLOCKS 8
 
-asmlinkage void cast6_ecb_enc_8way(struct cast6_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast6_ecb_enc_8way(struct cast6_ctx *ctx, u8 *dst,
 				   const u8 *src);
-asmlinkage void cast6_ecb_dec_8way(struct cast6_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast6_ecb_dec_8way(struct cast6_ctx *ctx, u8 *dst,
 				   const u8 *src);
 
-asmlinkage void cast6_cbc_dec_8way(struct cast6_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast6_cbc_dec_8way(struct cast6_ctx *ctx, u8 *dst,
 				   const u8 *src);
-asmlinkage void cast6_ctr_8way(struct cast6_ctx *ctx, u8 *dst, const u8 *src,
+asmlinkage __visible void cast6_ctr_8way(struct cast6_ctx *ctx, u8 *dst, const u8 *src,
 			       le128 *iv);
 
-asmlinkage void cast6_xts_enc_8way(struct cast6_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast6_xts_enc_8way(struct cast6_ctx *ctx, u8 *dst,
 				   const u8 *src, le128 *iv);
-asmlinkage void cast6_xts_dec_8way(struct cast6_ctx *ctx, u8 *dst,
+asmlinkage __visible void cast6_xts_dec_8way(struct cast6_ctx *ctx, u8 *dst,
 				   const u8 *src, le128 *iv);
 
 static void cast6_xts_enc(void *ctx, u128 *dst, const u128 *src, le128 *iv)
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
index 6812ad9..2b4a1d1 100644
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -56,7 +56,7 @@
 #define CRC32C_PCL_BREAKEVEN_EAGERFPU	512
 #define CRC32C_PCL_BREAKEVEN_NOEAGERFPU	1024
 
-asmlinkage unsigned int crc_pcl(const u8 *buffer, int len,
+asmlinkage __visible unsigned int crc_pcl(const u8 *buffer, int len,
 				unsigned int crc_init);
 static int crc32c_pcl_breakeven = CRC32C_PCL_BREAKEVEN_EAGERFPU;
 #if defined(X86_FEATURE_EAGER_FPU)
diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
index 7845d7f..c3879bd 100644
--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
+++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
@@ -33,7 +33,7 @@
 #include <asm/cpufeature.h>
 #include <asm/cpu_device_id.h>
 
-asmlinkage __u16 crc_t10dif_pcl(__u16 crc, const unsigned char *buf,
+asmlinkage __visible __u16 crc_t10dif_pcl(__u16 crc, const unsigned char *buf,
 				size_t len);
 
 struct chksum_desc_ctx {
diff --git a/arch/x86/crypto/salsa20_glue.c b/arch/x86/crypto/salsa20_glue.c
index 5e8e677..7c8d764 100644
--- a/arch/x86/crypto/salsa20_glue.c
+++ b/arch/x86/crypto/salsa20_glue.c
@@ -31,10 +31,10 @@ struct salsa20_ctx
 	u32 input[16];
 };
 
-asmlinkage void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k,
+asmlinkage __visible void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k,
 				 u32 keysize, u32 ivsize);
-asmlinkage void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv);
-asmlinkage void salsa20_encrypt_bytes(struct salsa20_ctx *ctx,
+asmlinkage __visible void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv);
+asmlinkage __visible void salsa20_encrypt_bytes(struct salsa20_ctx *ctx,
 				      const u8 *src, u8 *dst, u32 bytes);
 
 static int setkey(struct crypto_tfm *tfm, const u8 *key,
diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c
index 2fae489..790c413 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -28,17 +28,17 @@
 #define SERPENT_AVX2_PARALLEL_BLOCKS 16
 
 /* 16-way AVX2 parallel cipher functions */
-asmlinkage void serpent_ecb_enc_16way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ecb_enc_16way(struct serpent_ctx *ctx, u8 *dst,
 				      const u8 *src);
-asmlinkage void serpent_ecb_dec_16way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ecb_dec_16way(struct serpent_ctx *ctx, u8 *dst,
 				      const u8 *src);
-asmlinkage void serpent_cbc_dec_16way(void *ctx, u128 *dst, const u128 *src);
+asmlinkage __visible void serpent_cbc_dec_16way(void *ctx, u128 *dst, const u128 *src);
 
-asmlinkage void serpent_ctr_16way(void *ctx, u128 *dst, const u128 *src,
+asmlinkage __visible void serpent_ctr_16way(void *ctx, u128 *dst, const u128 *src,
 				  le128 *iv);
-asmlinkage void serpent_xts_enc_16way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_xts_enc_16way(struct serpent_ctx *ctx, u8 *dst,
 				      const u8 *src, le128 *iv);
-asmlinkage void serpent_xts_dec_16way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_xts_dec_16way(struct serpent_ctx *ctx, u8 *dst,
 				      const u8 *src, le128 *iv);
 
 static const struct common_glue_ctx serpent_enc = {
diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c
index ff48708..f3b6455 100644
--- a/arch/x86/crypto/serpent_avx_glue.c
+++ b/arch/x86/crypto/serpent_avx_glue.c
@@ -42,27 +42,27 @@
 #include <asm/crypto/glue_helper.h>
 
 /* 8-way parallel cipher functions */
-asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_ecb_enc_8way_avx);
 
-asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_ecb_dec_8way_avx);
 
-asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src);
 EXPORT_SYMBOL_GPL(serpent_cbc_dec_8way_avx);
 
-asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 				     const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_ctr_8way_avx);
 
-asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_xts_enc_8way_avx);
 
-asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src, le128 *iv);
 EXPORT_SYMBOL_GPL(serpent_xts_dec_8way_avx);
 
diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 4a11a9d..b24aec2 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -33,10 +33,10 @@
 #include <asm/xsave.h>
 
 
-asmlinkage void sha1_transform_ssse3(u32 *digest, const char *data,
+asmlinkage __visible void sha1_transform_ssse3(u32 *digest, const char *data,
 				     unsigned int rounds);
 #ifdef CONFIG_AS_AVX
-asmlinkage void sha1_transform_avx(u32 *digest, const char *data,
+asmlinkage __visible void sha1_transform_avx(u32 *digest, const char *data,
 				   unsigned int rounds);
 #endif
 
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index f248546..c71c65e 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -42,14 +42,14 @@
 #include <asm/xsave.h>
 #include <linux/string.h>
 
-asmlinkage void sha256_transform_ssse3(const char *data, u32 *digest,
+asmlinkage __visible void sha256_transform_ssse3(const char *data, u32 *digest,
 				     u64 rounds);
 #ifdef CONFIG_AS_AVX
-asmlinkage void sha256_transform_avx(const char *data, u32 *digest,
+asmlinkage __visible void sha256_transform_avx(const char *data, u32 *digest,
 				     u64 rounds);
 #endif
 #ifdef CONFIG_AS_AVX2
-asmlinkage void sha256_transform_rorx(const char *data, u32 *digest,
+asmlinkage __visible void sha256_transform_rorx(const char *data, u32 *digest,
 				     u64 rounds);
 #endif
 
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index f30cd10..45cea90 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -41,14 +41,14 @@
 
 #include <linux/string.h>
 
-asmlinkage void sha512_transform_ssse3(const char *data, u64 *digest,
+asmlinkage __visible void sha512_transform_ssse3(const char *data, u64 *digest,
 				     u64 rounds);
 #ifdef CONFIG_AS_AVX
-asmlinkage void sha512_transform_avx(const char *data, u64 *digest,
+asmlinkage __visible void sha512_transform_avx(const char *data, u64 *digest,
 				     u64 rounds);
 #endif
 #ifdef CONFIG_AS_AVX2
-asmlinkage void sha512_transform_rorx(const char *data, u64 *digest,
+asmlinkage __visible void sha512_transform_rorx(const char *data, u64 *digest,
 				     u64 rounds);
 #endif
 
diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c
index 4e3c665..1b2181a 100644
--- a/arch/x86/crypto/twofish_avx_glue.c
+++ b/arch/x86/crypto/twofish_avx_glue.c
@@ -48,19 +48,19 @@
 #define TWOFISH_PARALLEL_BLOCKS 8
 
 /* 8-way parallel cipher functions */
-asmlinkage void twofish_ecb_enc_8way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_ecb_enc_8way(struct twofish_ctx *ctx, u8 *dst,
 				     const u8 *src);
-asmlinkage void twofish_ecb_dec_8way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_ecb_dec_8way(struct twofish_ctx *ctx, u8 *dst,
 				     const u8 *src);
 
-asmlinkage void twofish_cbc_dec_8way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_cbc_dec_8way(struct twofish_ctx *ctx, u8 *dst,
 				     const u8 *src);
-asmlinkage void twofish_ctr_8way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_ctr_8way(struct twofish_ctx *ctx, u8 *dst,
 				 const u8 *src, le128 *iv);
 
-asmlinkage void twofish_xts_enc_8way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_xts_enc_8way(struct twofish_ctx *ctx, u8 *dst,
 				     const u8 *src, le128 *iv);
-asmlinkage void twofish_xts_dec_8way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_xts_dec_8way(struct twofish_ctx *ctx, u8 *dst,
 				     const u8 *src, le128 *iv);
 
 static inline void twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c
index 0a52023..5992a66 100644
--- a/arch/x86/crypto/twofish_glue.c
+++ b/arch/x86/crypto/twofish_glue.c
@@ -44,10 +44,10 @@
 #include <linux/module.h>
 #include <linux/types.h>
 
-asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
 				const u8 *src);
 EXPORT_SYMBOL_GPL(twofish_enc_blk);
-asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
 				const u8 *src);
 EXPORT_SYMBOL_GPL(twofish_dec_blk);
 
diff --git a/arch/x86/include/asm/crypto/camellia.h b/arch/x86/include/asm/crypto/camellia.h
index bb93333..43c6106 100644
--- a/arch/x86/include/asm/crypto/camellia.h
+++ b/arch/x86/include/asm/crypto/camellia.h
@@ -37,31 +37,31 @@ extern int xts_camellia_setkey(struct crypto_tfm *tfm, const u8 *key,
 			       unsigned int keylen);
 
 /* regular block cipher functions */
-asmlinkage void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void __camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, bool xor);
-asmlinkage void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_dec_blk(struct camellia_ctx *ctx, u8 *dst,
 				 const u8 *src);
 
 /* 2-way parallel cipher functions */
-asmlinkage void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void __camellia_enc_blk_2way(struct camellia_ctx *ctx, u8 *dst,
 					const u8 *src, bool xor);
-asmlinkage void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_dec_blk_2way(struct camellia_ctx *ctx, u8 *dst,
 				      const u8 *src);
 
 /* 16-way parallel cipher functions (avx/aes-ni) */
-asmlinkage void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ecb_enc_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
-asmlinkage void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ecb_dec_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
 
-asmlinkage void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_cbc_dec_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src);
-asmlinkage void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_ctr_16way(struct camellia_ctx *ctx, u8 *dst,
 				   const u8 *src, le128 *iv);
 
-asmlinkage void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_xts_enc_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src, le128 *iv);
-asmlinkage void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
+asmlinkage __visible void camellia_xts_dec_16way(struct camellia_ctx *ctx, u8 *dst,
 				       const u8 *src, le128 *iv);
 
 static inline void camellia_enc_blk(struct camellia_ctx *ctx, u8 *dst,
diff --git a/arch/x86/include/asm/crypto/serpent-avx.h b/arch/x86/include/asm/crypto/serpent-avx.h
index 33c2b8a..895b421 100644
--- a/arch/x86/include/asm/crypto/serpent-avx.h
+++ b/arch/x86/include/asm/crypto/serpent-avx.h
@@ -16,19 +16,19 @@ struct serpent_xts_ctx {
 	struct serpent_ctx crypt_ctx;
 };
 
-asmlinkage void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ecb_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src);
-asmlinkage void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ecb_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src);
 
-asmlinkage void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_cbc_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src);
-asmlinkage void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_ctr_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 				     const u8 *src, le128 *iv);
 
-asmlinkage void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_xts_enc_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src, le128 *iv);
-asmlinkage void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_xts_dec_8way_avx(struct serpent_ctx *ctx, u8 *dst,
 					 const u8 *src, le128 *iv);
 
 extern void __serpent_crypt_ctr(void *ctx, u128 *dst, const u128 *src,
diff --git a/arch/x86/include/asm/crypto/serpent-sse2.h b/arch/x86/include/asm/crypto/serpent-sse2.h
index e6e77df..c669be7 100644
--- a/arch/x86/include/asm/crypto/serpent-sse2.h
+++ b/arch/x86/include/asm/crypto/serpent-sse2.h
@@ -8,9 +8,9 @@
 
 #define SERPENT_PARALLEL_BLOCKS 4
 
-asmlinkage void __serpent_enc_blk_4way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void __serpent_enc_blk_4way(struct serpent_ctx *ctx, u8 *dst,
 				       const u8 *src, bool xor);
-asmlinkage void serpent_dec_blk_4way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_dec_blk_4way(struct serpent_ctx *ctx, u8 *dst,
 				     const u8 *src);
 
 static inline void serpent_enc_blk_xway(struct serpent_ctx *ctx, u8 *dst,
@@ -35,9 +35,9 @@ static inline void serpent_dec_blk_xway(struct serpent_ctx *ctx, u8 *dst,
 
 #define SERPENT_PARALLEL_BLOCKS 8
 
-asmlinkage void __serpent_enc_blk_8way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void __serpent_enc_blk_8way(struct serpent_ctx *ctx, u8 *dst,
 				       const u8 *src, bool xor);
-asmlinkage void serpent_dec_blk_8way(struct serpent_ctx *ctx, u8 *dst,
+asmlinkage __visible void serpent_dec_blk_8way(struct serpent_ctx *ctx, u8 *dst,
 				     const u8 *src);
 
 static inline void serpent_enc_blk_xway(struct serpent_ctx *ctx, u8 *dst,
diff --git a/arch/x86/include/asm/crypto/twofish.h b/arch/x86/include/asm/crypto/twofish.h
index 878c51c..3d9c344 100644
--- a/arch/x86/include/asm/crypto/twofish.h
+++ b/arch/x86/include/asm/crypto/twofish.h
@@ -17,15 +17,15 @@ struct twofish_xts_ctx {
 };
 
 /* regular block cipher functions from twofish_x86_64 module */
-asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
 				const u8 *src);
-asmlinkage void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_dec_blk(struct twofish_ctx *ctx, u8 *dst,
 				const u8 *src);
 
 /* 3-way parallel cipher functions */
-asmlinkage void __twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void __twofish_enc_blk_3way(struct twofish_ctx *ctx, u8 *dst,
 				       const u8 *src, bool xor);
-asmlinkage void twofish_dec_blk_3way(struct twofish_ctx *ctx, u8 *dst,
+asmlinkage __visible void twofish_dec_blk_3way(struct twofish_ctx *ctx, u8 *dst,
 				     const u8 *src);
 
 /* helpers from twofish_x86_64-3way module */
-- 
1.8.5.2


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 4/4] asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/*
  2014-04-01 17:32 Fix up asmlinkage Andi Kleen
                   ` (2 preceding siblings ...)
  2014-04-01 17:32 ` [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/* Andi Kleen
@ 2014-04-01 17:32 ` Andi Kleen
  3 siblings, 0 replies; 9+ messages in thread
From: Andi Kleen @ 2014-04-01 17:32 UTC (permalink / raw)
  To: x86; +Cc: linux-kernel, torvalds, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

As requested by Linus, add explicit __visible to the asmlinkage users.
This marks both functions that need to stay visible to the assembler
and some functions defined in assembler, making it clear to the
compiler that they exist elsewhere.

Tree sweep for rest of tree.
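
To make the intended pattern concrete, a minimal sketch (the function
name is made up for illustration, it is not taken from the tree):

#include <linux/linkage.h>	/* asmlinkage, and __visible via compiler.h */

/* C function that assembly jumps to: asmlinkage fixes the calling
 * convention, __visible keeps LTO/-fwhole-program from dropping or
 * localizing the symbol even though no C code ever calls it. */
asmlinkage __visible void do_example_entry(void)
{
	/* ... handler body ... */
}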

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 drivers/pnp/pnpbios/bioscalls.c |  2 +-
 init/main.c                     |  2 +-
 kernel/context_tracking.c       |  2 +-
 kernel/locking/lockdep.c        |  2 +-
 kernel/power/snapshot.c         |  2 +-
 kernel/printk/printk.c          |  4 ++--
 kernel/sched/core.c             | 10 +++++-----
 kernel/softirq.c                |  4 ++--
 lib/dump_stack.c                |  4 ++--
 9 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/pnp/pnpbios/bioscalls.c b/drivers/pnp/pnpbios/bioscalls.c
index deb7f4b..438d4c7 100644
--- a/drivers/pnp/pnpbios/bioscalls.c
+++ b/drivers/pnp/pnpbios/bioscalls.c
@@ -37,7 +37,7 @@ __visible struct {
  * kernel begins at offset 3GB...
  */
 
-asmlinkage void pnp_bios_callfunc(void);
+asmlinkage __visible void pnp_bios_callfunc(void);
 
 __asm__(".text			\n"
 	__ALIGN_STR "\n"
diff --git a/init/main.c b/init/main.c
index 9c7fd4c..48655ce 100644
--- a/init/main.c
+++ b/init/main.c
@@ -476,7 +476,7 @@ static void __init mm_init(void)
 	vmalloc_init();
 }
 
-asmlinkage void __init start_kernel(void)
+asmlinkage __visible void __init start_kernel(void)
 {
 	char * command_line;
 	extern const struct kernel_param __start___param[], __stop___param[];
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 6cb20d2..019d450 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -120,7 +120,7 @@ void context_tracking_user_enter(void)
  * instead of preempt_schedule() to exit user context if needed before
  * calling the scheduler.
  */
-asmlinkage void __sched notrace preempt_schedule_context(void)
+asmlinkage __visible void __sched notrace preempt_schedule_context(void)
 {
 	enum ctx_state prev_ctx;
 
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b0e9467..d24e433 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4188,7 +4188,7 @@ void debug_show_held_locks(struct task_struct *task)
 }
 EXPORT_SYMBOL_GPL(debug_show_held_locks);
 
-asmlinkage void lockdep_sys_exit(void)
+asmlinkage __visible void lockdep_sys_exit(void)
 {
 	struct task_struct *curr = current;
 
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index d9f61a1..7567b0d 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1585,7 +1585,7 @@ swsusp_alloc(struct memory_bitmap *orig_bm, struct memory_bitmap *copy_bm,
 	return -ENOMEM;
 }
 
-asmlinkage int swsusp_save(void)
+asmlinkage __visible int swsusp_save(void)
 {
 	unsigned int nr_pages, nr_highmem;
 
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 4dae9cb..17a73b4 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1671,7 +1671,7 @@ EXPORT_SYMBOL(printk_emit);
  *
  * See the vsnprintf() documentation for format string extensions over C99.
  */
-asmlinkage int printk(const char *fmt, ...)
+asmlinkage __visible int printk(const char *fmt, ...)
 {
 	va_list args;
 	int r;
@@ -1734,7 +1734,7 @@ void early_vprintk(const char *fmt, va_list ap)
 	}
 }
 
-asmlinkage void early_printk(const char *fmt, ...)
+asmlinkage __visible void early_printk(const char *fmt, ...)
 {
 	va_list ap;
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a47902c..82eee7f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2194,7 +2194,7 @@ static inline void post_schedule(struct rq *rq)
  * schedule_tail - first thing a freshly forked thread must call.
  * @prev: the thread we just switched away from.
  */
-asmlinkage void schedule_tail(struct task_struct *prev)
+asmlinkage __visible void schedule_tail(struct task_struct *prev)
 	__releases(rq->lock)
 {
 	struct rq *rq = this_rq();
@@ -2743,7 +2743,7 @@ static inline void sched_submit_work(struct task_struct *tsk)
 		blk_schedule_flush_plug(tsk);
 }
 
-asmlinkage void __sched schedule(void)
+asmlinkage __visible void __sched schedule(void)
 {
 	struct task_struct *tsk = current;
 
@@ -2753,7 +2753,7 @@ asmlinkage void __sched schedule(void)
 EXPORT_SYMBOL(schedule);
 
 #ifdef CONFIG_CONTEXT_TRACKING
-asmlinkage void __sched schedule_user(void)
+asmlinkage __visible void __sched schedule_user(void)
 {
 	/*
 	 * If we come here after a random call to set_need_resched(),
@@ -2785,7 +2785,7 @@ void __sched schedule_preempt_disabled(void)
  * off of preempt_enable. Kernel preemptions off return from interrupt
  * occur there and call schedule directly.
  */
-asmlinkage void __sched notrace preempt_schedule(void)
+asmlinkage __visible void __sched notrace preempt_schedule(void)
 {
 	/*
 	 * If there is a non-zero preempt_count or interrupts are disabled,
@@ -2815,7 +2815,7 @@ EXPORT_SYMBOL(preempt_schedule);
  * Note, that this is called and return with irqs disabled. This will
  * protect us against recursive calling from irq.
  */
-asmlinkage void __sched preempt_schedule_irq(void)
+asmlinkage __visible void __sched preempt_schedule_irq(void)
 {
 	enum ctx_state prev_state;
 
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 490fcbb..6761973 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -222,7 +222,7 @@ static inline bool lockdep_softirq_start(void) { return false; }
 static inline void lockdep_softirq_end(bool in_hardirq) { }
 #endif
 
-asmlinkage void __do_softirq(void)
+asmlinkage __visible void __do_softirq(void)
 {
 	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
 	unsigned long old_flags = current->flags;
@@ -298,7 +298,7 @@ restart:
 	tsk_restore_flags(current, old_flags, PF_MEMALLOC);
 }
 
-asmlinkage void do_softirq(void)
+asmlinkage __visible void do_softirq(void)
 {
 	__u32 pending;
 	unsigned long flags;
diff --git a/lib/dump_stack.c b/lib/dump_stack.c
index f23b63f..6745c62 100644
--- a/lib/dump_stack.c
+++ b/lib/dump_stack.c
@@ -23,7 +23,7 @@ static void __dump_stack(void)
 #ifdef CONFIG_SMP
 static atomic_t dump_lock = ATOMIC_INIT(-1);
 
-asmlinkage void dump_stack(void)
+asmlinkage __visible void dump_stack(void)
 {
 	int was_locked;
 	int old;
@@ -55,7 +55,7 @@ retry:
 	preempt_enable();
 }
 #else
-asmlinkage void dump_stack(void)
+asmlinkage __visible void dump_stack(void)
 {
 	__dump_stack();
 }
-- 
1.8.5.2


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 2/4] asmlinkage, x86: Add explicit __visible to arch/x86/*
  2014-04-01 17:32 ` [PATCH 2/4] asmlinkage, x86: Add explicit __visible to arch/x86/* Andi Kleen
@ 2014-04-01 18:33   ` Linus Torvalds
  0 siblings, 0 replies; 9+ messages in thread
From: Linus Torvalds @ 2014-04-01 18:33 UTC (permalink / raw)
  To: Andi Kleen
  Cc: the arch/x86 maintainers, Linux Kernel Mailing List, Andi Kleen

This still has these kinds of pointless changes. Why?

In the declaration of the function, the __visible is just noise. It's
not interesting to the caller, and doesn't add anything. In fact, I'd
argue that these kinds of declarations are not just noise, they are
*stupid* noise, because if the function is ever used by C code then
that "__visible" is pointless.

So get rid of the __visible. It's just more of the same confusion
where you think __visible and asmlinkage go together. They damn well
don't.
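
To spell out the difference (a sketch only, with invented symbol names):

/* declaration of a routine implemented in a .S file: the compiler only
 * ever emits calls to it, so __visible adds exactly nothing here */
asmlinkage void example_asm_routine(void);

/* definition of a C function that assembly jumps to: the one place
 * where __visible actually matters under LTO */
asmlinkage __visible void example_c_entry(void)
{
	/* ... */
}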

                  Linus

On Tue, Apr 1, 2014 at 10:32 AM, Andi Kleen <andi@firstfloor.org> wrote:
>  #include <asm/sections.h>
>
>  /* Interrupt handlers registered during init_IRQ */
> -extern asmlinkage void apic_timer_interrupt(void);
> -extern asmlinkage void x86_platform_ipi(void);
> -extern asmlinkage void kvm_posted_intr_ipi(void);
> -extern asmlinkage void error_interrupt(void);
> -extern asmlinkage void irq_work_interrupt(void);
...
> +extern asmlinkage __visible void apic_timer_interrupt(void);
> +extern asmlinkage __visible void x86_platform_ipi(void);
> +extern asmlinkage __visible void kvm_posted_intr_ipi(void);
> +extern asmlinkage __visible void error_interrupt(void);
> +extern asmlinkage __visible void irq_work_interrupt(void);

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/*
  2014-04-01 17:32 ` [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/* Andi Kleen
@ 2014-04-01 18:53   ` Linus Torvalds
  2014-04-01 19:09     ` Linus Torvalds
  0 siblings, 1 reply; 9+ messages in thread
From: Linus Torvalds @ 2014-04-01 18:53 UTC (permalink / raw)
  To: Andi Kleen
  Cc: the arch/x86 maintainers, Linux Kernel Mailing List, Andi Kleen,
	Herbert Xu

On Tue, Apr 1, 2014 at 10:32 AM, Andi Kleen <andi@firstfloor.org> wrote:
>
> -asmlinkage void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
> -asmlinkage void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
> +asmlinkage __visible void aes_enc_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);
> +asmlinkage __visible void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in);

This seems to be more of the same "__visible in declaration" badness.

Don't do it.

As far as I can tell, the only point of "__visible" is on C symbols
called from assembly language.

You're adding them to assembly routines called from C, which is
exactly the wrong way around, and pointless. And it's worse than
pointless churn, it just confuses people, and shows that you are
confused about the meaning of it.

Again, it seems to be because you've mentally tied "asmlinkage"
together with "__visible", but they are totally disjoint. One is about
calling conventions, the other is about the C compiler not hiding the
function when using -fwhole-program.

STOP CONFUSING THE TWO. They are independent and have absolutely
*NOTHING* to do with each other.
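
Roughly, paraphrasing the two definitions from memory (not quoting the
headers verbatim):

/* asmlinkage: calling convention only -- e.g. on 32-bit x86 it forces
 * all arguments onto the stack */
#define asmlinkage	__attribute__((regparm(0)))

/* __visible: visibility only -- tells gcc the symbol is referenced
 * from outside what -fwhole-program/LTO can see */
#define __visible	__attribute__((externally_visible))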

             Linus

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/*
  2014-04-01 18:53   ` Linus Torvalds
@ 2014-04-01 19:09     ` Linus Torvalds
  2014-04-01 23:22       ` Andi Kleen
  0 siblings, 1 reply; 9+ messages in thread
From: Linus Torvalds @ 2014-04-01 19:09 UTC (permalink / raw)
  To: Andi Kleen
  Cc: the arch/x86 maintainers, Linux Kernel Mailing List, Andi Kleen,
	Herbert Xu

On Tue, Apr 1, 2014 at 11:53 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> You're adding them to assembly routines called from C, which is
> exactly the wrong way around, and pointless. And it's worse than
> pointless churn, it just confuses people, and shows that you are
> confused about the meaning of it.

Basically, all these greps should return the empty set:

   git grep static.*__visible
   git grep extern.*__visible
   git grep "__visible.*(.*);"

because they are all signs of confusion. A 'static' variable
(declaration _or_ definition) should never be externally visible (as a
definition it might be called from inline asm, I guess, but then it
should be done as an argument so that the compiler sees the use). An
extern declaration can never sanely be marked "__visible", because the
only use of such a declaration is for C code (which by definition
doesn't need it). And the last case is for a function declaration,
which has an implicit extern.

And yeah, we do have a few confused users already (28, to be exact).
They should be fixed. But more importantly, we certainly shouldn't be
adding more of them.
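
For illustration, the shapes those greps catch, next to the one
legitimate use (names invented):

static __visible void foo(void);   /* nonsense: static yet "visible" */
extern __visible void bar(void);   /* declaration: only C callers see it, no point */
__visible void baz(int x);         /* same thing, the extern is just implicit */

/* the real use: a C definition that code outside the compiler's view
 * (assembly, mostly) jumps to */
__visible void real_entry(void)
{
	/* ... */
}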

                    Linus

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/*
  2014-04-01 19:09     ` Linus Torvalds
@ 2014-04-01 23:22       ` Andi Kleen
  0 siblings, 0 replies; 9+ messages in thread
From: Andi Kleen @ 2014-04-01 23:22 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andi Kleen, the arch/x86 maintainers, Linux Kernel Mailing List,
	Herbert Xu

> Basically, all these grep's should return the empty set:
> 
>    git grep static.*__visible

All empty.


>    git grep extern.*__visible

There are some of these left over from earlier changes.
I'll send another patch to fix those too.

>    git grep "__visible.*(.*);"

Same as above.

-Andi

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2014-04-01 23:22 UTC | newest]

Thread overview: 9+ messages
2014-04-01 17:32 Fix up asmlinkage Andi Kleen
2014-04-01 17:32 ` [PATCH 1/4] Revert "lto: Make asmlinkage __visible" Andi Kleen
2014-04-01 17:32 ` [PATCH 2/4] asmlinkage, x86: Add explicit __visible to arch/x86/* Andi Kleen
2014-04-01 18:33   ` Linus Torvalds
2014-04-01 17:32 ` [PATCH 3/4] asmlinkage, x86: Add explicit __visible to arch/x86/crypto/* Andi Kleen
2014-04-01 18:53   ` Linus Torvalds
2014-04-01 19:09     ` Linus Torvalds
2014-04-01 23:22       ` Andi Kleen
2014-04-01 17:32 ` [PATCH 4/4] asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/* Andi Kleen
