* [PATCH v2 0/6] Baby steps toward cleaning up KERNEL_STACK_OFFSET
@ 2015-03-06  3:19 Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski

Denys is right that KERNEL_STACK_OFFSET is a mess.  Let's start fixing
it.

This removes all C code that *reads* kernel_stack.  It also fixes the
KERNEL_STACK_OFFSET abomination in ia32_sysenter_target.

It does not fix the KERNEL_STACK_OFFSET abomination in GET_THREAD_INFO
and THREAD_INFO.  I think that should be its own patch.

It also doesn't change the two syscall targets.  To fix them, we need
to make a decision: either give KERNEL_STACK_OFFSET the correct
nonzero value to save an instruction, or get rid of kernel_stack
entirely.
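
For reference, the pattern being cleaned up looks roughly like this (a
minimal sketch with assumed names, modeled on the current_thread_info()
code touched in patch 2; top_of_kernel_stack() is hypothetical, not in
the tree):

	/* Old scheme: kernel_stack is stored with a bias, so every
	 * reader has to add KERNEL_STACK_OFFSET back to recover the
	 * true top of the per-cpu kernel stack. */
	DECLARE_PER_CPU(unsigned long, kernel_stack);

	static inline unsigned long top_of_kernel_stack(void)
	{
		return this_cpu_read_stable(kernel_stack) + KERNEL_STACK_OFFSET;
	}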

Changes from v1:
 - Fix missing export.
 - Fix lguest code.
 - Add more init_tss naming cleanups (Ingo's suggestion).
 - Changelog improvements (Ingo).
 - Improve the check in ist_begin_non_atomic (Denys).

Andy Lutomirski (6):
  x86: Add this_cpu_sp0() to read sp0 for the current cpu
  x86: Switch all C consumers of kernel_stack to this_cpu_sp0
  x86, asm: Change the 32-bit sysenter code to use sp0
  x86: Rename init_tss to cpu_tss
  x86: Remove INIT_TSS and fold the definitions into cpu_tss
  x86, asm: Rename INIT_TSS_IST to TSS_IST

 arch/x86/ia32/ia32entry.S          |  3 +--
 arch/x86/include/asm/processor.h   | 27 ++++++---------------------
 arch/x86/include/asm/thread_info.h |  3 +--
 arch/x86/kernel/asm-offsets_64.c   |  1 +
 arch/x86/kernel/cpu/common.c       |  6 +++---
 arch/x86/kernel/entry_64.S         |  6 +++---
 arch/x86/kernel/ioport.c           |  2 +-
 arch/x86/kernel/process.c          | 23 +++++++++++++++++++++--
 arch/x86/kernel/process_32.c       |  2 +-
 arch/x86/kernel/process_64.c       |  2 +-
 arch/x86/kernel/traps.c            |  4 ++--
 arch/x86/kernel/vm86_32.c          |  4 ++--
 arch/x86/lguest/boot.c             |  1 +
 arch/x86/power/cpu.c               |  2 +-
 arch/x86/xen/enlighten.c           |  1 +
 15 files changed, 46 insertions(+), 41 deletions(-)

-- 
2.1.0



* [PATCH v2 1/6] x86: Add this_cpu_sp0() to read sp0 for the current cpu
@ 2015-03-06  3:19 ` Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski,
	Konrad Rzeszutek Wilk, Boris Ostrovsky, Rusty Russell

We currently store references to the top of the kernel stack in
multiple places: kernel_stack (with an offset) and
init_tss.x86_tss.sp0 (no offset).  The latter is defined by hardware
and is a clean canonical way to find the top of the stack.  Add an
accessor so we can start using it.

This needs minor paravirt tweaks.  On native, sp0 defines the top of
the kernel stack and is therefore always correct.  On Xen and
lguest, the hypervisor tracks the top of the stack, but we want to
start reading sp0 in the kernel.  Fixing this is simple: just update
our local copy of sp0 as well as the hypervisor's copy on task
switches.
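
Put differently, after this patch every load_sp0() implementation has
to leave the per-cpu TSS copy in sync with thread->sp0.  A sketch of
the invariant (check_sp0_invariant() is an illustrative helper, not
part of the series):

	/* Must hold on the local CPU after any load_sp0(tss, thread),
	 * whether the update went through native code, Xen, or lguest:
	 * the value this_cpu_sp0() reads matches the thread's sp0. */
	static inline void check_sp0_invariant(struct thread_struct *thread)
	{
		BUG_ON(this_cpu_sp0() != thread->sp0);
	}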

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/include/asm/processor.h | 5 +++++
 arch/x86/kernel/process.c        | 1 +
 arch/x86/lguest/boot.c           | 1 +
 arch/x86/xen/enlighten.c         | 1 +
 4 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 7be2c9a6caba..71c3a826a690 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -564,6 +564,11 @@ static inline void native_swapgs(void)
 #endif
 }
 
+static inline unsigned long this_cpu_sp0(void)
+{
+	return this_cpu_read_stable(init_tss.x86_tss.sp0);
+}
+
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #else
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 046e2d620bbe..ff5c9088b1c5 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -38,6 +38,7 @@
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
 __visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, init_tss) = INIT_TSS;
+EXPORT_PER_CPU_SYMBOL_GPL(init_tss);
 
 #ifdef CONFIG_X86_64
 static DEFINE_PER_CPU(unsigned char, is_idle);
diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
index ac4453d8520e..8561585ee2c6 100644
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -1076,6 +1076,7 @@ static void lguest_load_sp0(struct tss_struct *tss,
 {
 	lazy_hcall3(LHCALL_SET_STACK, __KERNEL_DS | 0x1, thread->sp0,
 		   THREAD_SIZE / PAGE_SIZE);
+	tss->x86_tss.sp0 = thread->sp0;
 }
 
 /* Let's just say, I wouldn't do debugging under a Guest. */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 5240f563076d..81665c9f2132 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -912,6 +912,7 @@ static void xen_load_sp0(struct tss_struct *tss,
 	mcs = xen_mc_entry(0);
 	MULTI_stack_switch(mcs.mc, __KERNEL_DS, thread->sp0);
 	xen_mc_issue(PARAVIRT_LAZY_CPU);
+	tss->x86_tss.sp0 = thread->sp0;
 }
 
 static void xen_set_iopl_mask(unsigned mask)
-- 
2.1.0



* [PATCH v2 2/6] x86: Switch all C consumers of kernel_stack to this_cpu_sp0
@ 2015-03-06  3:19 ` Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski

This will make modifying the semantics of kernel_stack easier.

The change to ist_begin_non_atomic() is necessary because sp0 no
longer points to the same THREAD_SIZE-aligned region as rsp; it's
one byte too high for that.  At Denys' suggestion, rather than
offsetting it, just check explicitly that we're in the correct range
ending at sp0.  This has the added benefit that we no longer assume
that the thread stack is aligned to THREAD_SIZE.
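
The arithmetic behind the new check, as a standalone sketch
(on_task_stack() is a hypothetical name used only for illustration):
one unsigned comparison covers both ends of the range.

	/* The valid range is (sp0 - THREAD_SIZE, sp0].  With unsigned
	 * arithmetic, an rsp above sp0 wraps around to a huge
	 * difference, and an rsp at or below sp0 - THREAD_SIZE yields
	 * a difference >= THREAD_SIZE, so a single compare rejects
	 * both cases. */
	static inline bool on_task_stack(unsigned long sp0, unsigned long rsp)
	{
		return sp0 - rsp < THREAD_SIZE;
	}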

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/include/asm/thread_info.h | 3 +--
 arch/x86/kernel/traps.c            | 4 ++--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 1d4e4f279a32..a2fa1899494e 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -159,8 +159,7 @@ DECLARE_PER_CPU(unsigned long, kernel_stack);
 static inline struct thread_info *current_thread_info(void)
 {
 	struct thread_info *ti;
-	ti = (void *)(this_cpu_read_stable(kernel_stack) +
-		      KERNEL_STACK_OFFSET - THREAD_SIZE);
+	ti = (void *)(this_cpu_sp0() - THREAD_SIZE);
 	return ti;
 }
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 42819886be0c..484eb03a3f32 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -174,8 +174,8 @@ void ist_begin_non_atomic(struct pt_regs *regs)
 	 * will catch asm bugs and any attempt to use ist_preempt_enable
 	 * from double_fault.
 	 */
-	BUG_ON(((current_stack_pointer() ^ this_cpu_read_stable(kernel_stack))
-		& ~(THREAD_SIZE - 1)) != 0);
+	BUG_ON((unsigned long)(this_cpu_sp0() - current_stack_pointer()) >=
+	       THREAD_SIZE);
 
 	preempt_count_sub(HARDIRQ_OFFSET);
 }
-- 
2.1.0



* [PATCH v2 3/6] x86, asm: Change the 32-bit sysenter code to use sp0
@ 2015-03-06  3:19 ` Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski

The ia32 sysenter code computed the top of the kernel stack in rsp
by reading kernel_stack and then adjusting it.  It can be simplified
to just read sp0 directly.

This requires the addition of a new asm-offsets entry for sp0.
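
For context, an asm-offsets entry just exports a struct offset as an
assembler-visible constant; roughly (an illustrative expansion,
assuming the usual asm-offsets machinery):

	/* OFFSET(TSS_sp0, tss_struct, x86_tss.sp0) effectively makes
	 * an assembler constant equal to the offsetof() below, which
	 * is what lets the sysenter path address the field as
	 * "init_tss + TSS_sp0" from assembly. */
	#define TSS_sp0 offsetof(struct tss_struct, x86_tss.sp0)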

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/ia32/ia32entry.S        | 3 +--
 arch/x86/kernel/asm-offsets_64.c | 1 +
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index ed9746340363..719db63b35c4 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -113,8 +113,7 @@ ENTRY(ia32_sysenter_target)
 	CFI_DEF_CFA	rsp,0
 	CFI_REGISTER	rsp,rbp
 	SWAPGS_UNSAFE_STACK
-	movq	PER_CPU_VAR(kernel_stack), %rsp
-	addq	$(KERNEL_STACK_OFFSET),%rsp
+	movq	PER_CPU_VAR(init_tss + TSS_sp0), %rsp
 	/*
 	 * No need to follow this irqs on/off section: the syscall
 	 * disabled irqs, here we enable it straight after entry:
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index fdcbb4d27c9f..5ce6f2da8763 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -81,6 +81,7 @@ int main(void)
 #undef ENTRY
 
 	OFFSET(TSS_ist, tss_struct, x86_tss.ist);
+	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
 	BLANK();
 
 	DEFINE(__NR_syscall_max, sizeof(syscalls_64) - 1);
-- 
2.1.0



* [PATCH v2 4/6] x86: Rename init_tss to cpu_tss
@ 2015-03-06  3:19 ` Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski

It has nothing to do with init -- there's only one TSS per CPU.

Other names considered include:
 - current_tss: Confusing because we never switch the TSS.
 - singleton_tss: Too long.

This patch was generated with 's/init_tss/cpu_tss/g'.  Followup patches
will fix INIT_TSS and INIT_TSS_IST by hand.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/ia32/ia32entry.S        | 2 +-
 arch/x86/include/asm/processor.h | 4 ++--
 arch/x86/kernel/cpu/common.c     | 6 +++---
 arch/x86/kernel/entry_64.S       | 2 +-
 arch/x86/kernel/ioport.c         | 2 +-
 arch/x86/kernel/process.c        | 6 +++---
 arch/x86/kernel/process_32.c     | 2 +-
 arch/x86/kernel/process_64.c     | 2 +-
 arch/x86/kernel/vm86_32.c        | 4 ++--
 arch/x86/power/cpu.c             | 2 +-
 10 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 719db63b35c4..ad9efef65a6b 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -113,7 +113,7 @@ ENTRY(ia32_sysenter_target)
 	CFI_DEF_CFA	rsp,0
 	CFI_REGISTER	rsp,rbp
 	SWAPGS_UNSAFE_STACK
-	movq	PER_CPU_VAR(init_tss + TSS_sp0), %rsp
+	movq	PER_CPU_VAR(cpu_tss + TSS_sp0), %rsp
 	/*
 	 * No need to follow this irqs on/off section: the syscall
 	 * disabled irqs, here we enable it straight after entry:
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 71c3a826a690..117ee65473e2 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -282,7 +282,7 @@ struct tss_struct {
 
 } ____cacheline_aligned;
 
-DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, init_tss);
+DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss);
 
 /*
  * Save the original ist values for checking stack pointers during debugging
@@ -566,7 +566,7 @@ static inline void native_swapgs(void)
 
 static inline unsigned long this_cpu_sp0(void)
 {
-	return this_cpu_read_stable(init_tss.x86_tss.sp0);
+	return this_cpu_read_stable(cpu_tss.x86_tss.sp0);
 }
 
 #ifdef CONFIG_PARAVIRT
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2346c95c6ab1..5d0f0cc7ea26 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -979,7 +979,7 @@ static void syscall32_cpu_init(void)
 void enable_sep_cpu(void)
 {
 	int cpu = get_cpu();
-	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
 
 	if (!boot_cpu_has(X86_FEATURE_SEP)) {
 		put_cpu();
@@ -1307,7 +1307,7 @@ void cpu_init(void)
 	 */
 	load_ucode_ap();
 
-	t = &per_cpu(init_tss, cpu);
+	t = &per_cpu(cpu_tss, cpu);
 	oist = &per_cpu(orig_ist, cpu);
 
 #ifdef CONFIG_NUMA
@@ -1391,7 +1391,7 @@ void cpu_init(void)
 {
 	int cpu = smp_processor_id();
 	struct task_struct *curr = current;
-	struct tss_struct *t = &per_cpu(init_tss, cpu);
+	struct tss_struct *t = &per_cpu(cpu_tss, cpu);
 	struct thread_struct *thread = &curr->thread;
 
 	wait_for_master_cpu(cpu);
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 622ce4254893..0c00fd80249a 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -959,7 +959,7 @@ apicinterrupt IRQ_WORK_VECTOR \
 /*
  * Exception entry points.
  */
-#define INIT_TSS_IST(x) PER_CPU_VAR(init_tss) + (TSS_ist + ((x) - 1) * 8)
+#define INIT_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
 
 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
 ENTRY(\sym)
diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
index 4ddaf66ea35f..37dae792dbbe 100644
--- a/arch/x86/kernel/ioport.c
+++ b/arch/x86/kernel/ioport.c
@@ -54,7 +54,7 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
 	 * because the ->io_bitmap_max value must match the bitmap
 	 * contents:
 	 */
-	tss = &per_cpu(init_tss, get_cpu());
+	tss = &per_cpu(cpu_tss, get_cpu());
 
 	if (turn_on)
 		bitmap_clear(t->io_bitmap_ptr, from, num);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ff5c9088b1c5..6f6087349231 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -37,8 +37,8 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, init_tss) = INIT_TSS;
-EXPORT_PER_CPU_SYMBOL_GPL(init_tss);
+__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = INIT_TSS;
+EXPORT_PER_CPU_SYMBOL_GPL(cpu_tss);
 
 #ifdef CONFIG_X86_64
 static DEFINE_PER_CPU(unsigned char, is_idle);
@@ -110,7 +110,7 @@ void exit_thread(void)
 	unsigned long *bp = t->io_bitmap_ptr;
 
 	if (bp) {
-		struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
+		struct tss_struct *tss = &per_cpu(cpu_tss, get_cpu());
 
 		t->io_bitmap_ptr = NULL;
 		clear_thread_flag(TIF_IO_BITMAP);
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 603c4f99cb5a..d3460af3d27a 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -248,7 +248,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
 	fpu_switch_t fpu;
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 854b5981b327..2cd562f96c1f 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -277,7 +277,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	struct thread_struct *prev = &prev_p->thread;
 	struct thread_struct *next = &next_p->thread;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
 	unsigned fsindex, gsindex;
 	fpu_switch_t fpu;
 
diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
index e8edcf52e069..fc9db6ef2a95 100644
--- a/arch/x86/kernel/vm86_32.c
+++ b/arch/x86/kernel/vm86_32.c
@@ -150,7 +150,7 @@ struct pt_regs *save_v86_state(struct kernel_vm86_regs *regs)
 		do_exit(SIGSEGV);
 	}
 
-	tss = &per_cpu(init_tss, get_cpu());
+	tss = &per_cpu(cpu_tss, get_cpu());
 	current->thread.sp0 = current->thread.saved_sp0;
 	current->thread.sysenter_cs = __KERNEL_CS;
 	load_sp0(tss, &current->thread);
@@ -318,7 +318,7 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
 	tsk->thread.saved_fs = info->regs32->fs;
 	tsk->thread.saved_gs = get_user_gs(info->regs32);
 
-	tss = &per_cpu(init_tss, get_cpu());
+	tss = &per_cpu(cpu_tss, get_cpu());
 	tsk->thread.sp0 = (unsigned long) &info->VM86_TSS_ESP0;
 	if (cpu_has_sep)
 		tsk->thread.sysenter_cs = 0;
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 3e32ed5648a0..757678fb26e1 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -134,7 +134,7 @@ static void do_fpu_end(void)
 static void fix_processor_context(void)
 {
 	int cpu = smp_processor_id();
-	struct tss_struct *t = &per_cpu(init_tss, cpu);
+	struct tss_struct *t = &per_cpu(cpu_tss, cpu);
 #ifdef CONFIG_X86_64
 	struct desc_struct *desc = get_cpu_gdt_table(cpu);
 	tss_desc tss;
-- 
2.1.0



* [PATCH v2 5/6] x86: Remove INIT_TSS and fold the definitions into cpu_tss
@ 2015-03-06  3:19 ` Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski

The INIT_TSS macro is unnecessary.  Just define the initial TSS where
cpu_tss is defined.

While we're at it, merge the 32-bit and 64-bit definitions.  The only
syntactic change is that 32-bit kernels were computing sp0 as long, but
now they compute it as unsigned long.

Verified by objdump: the contents and relocations of
.data..percpu..shared_aligned are unchanged on 32-bit and 64-bit
kernels.
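
One detail the merge has to preserve is the io_bitmap sizing described
in the comment that moves along with it.  A compile-time sketch of
that requirement (io_bitmap_size_check() is illustrative, and the
exact array size is assumed from the 32-bit definition):

	static inline void io_bitmap_size_check(void)
	{
		/* The CPU reads one byte past the end of the I/O
		 * permission bitmap, so the array is declared one long
		 * larger than IO_BITMAP_LONGS and the initializer
		 * "[0 ... IO_BITMAP_LONGS] = ~0" keeps that trailing
		 * byte all ones. */
		BUILD_BUG_ON(sizeof(((struct tss_struct *)0)->io_bitmap) <
			     IO_BITMAP_LONGS * sizeof(unsigned long) + 1);
	}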

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/include/asm/processor.h | 20 --------------------
 arch/x86/kernel/process.c        | 20 +++++++++++++++++++-
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 117ee65473e2..f5e3ec63767d 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -818,22 +818,6 @@ static inline void spin_lock_prefetch(const void *x)
 	.io_bitmap_ptr		= NULL,					  \
 }
 
-/*
- * Note that the .io_bitmap member must be extra-big. This is because
- * the CPU will access an additional byte beyond the end of the IO
- * permission bitmap. The extra byte must be all 1 bits, and must
- * be within the limit.
- */
-#define INIT_TSS  {							  \
-	.x86_tss = {							  \
-		.sp0		= sizeof(init_stack) + (long)&init_stack, \
-		.ss0		= __KERNEL_DS,				  \
-		.ss1		= __KERNEL_CS,				  \
-		.io_bitmap_base	= INVALID_IO_BITMAP_OFFSET,		  \
-	 },								  \
-	.io_bitmap		= { [0 ... IO_BITMAP_LONGS] = ~0 },	  \
-}
-
 extern unsigned long thread_saved_pc(struct task_struct *tsk);
 
 #define THREAD_SIZE_LONGS      (THREAD_SIZE/sizeof(unsigned long))
@@ -892,10 +876,6 @@ extern unsigned long thread_saved_pc(struct task_struct *tsk);
 	.sp0 = (unsigned long)&init_stack + sizeof(init_stack) \
 }
 
-#define INIT_TSS  { \
-	.x86_tss.sp0 = (unsigned long)&init_stack + sizeof(init_stack) \
-}
-
 /*
  * Return saved PC of a blocked thread.
  * What is this good for? it will be always the scheduler or ret_from_fork.
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 6f6087349231..f4c0af7fc3a0 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -37,7 +37,25 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = INIT_TSS;
+__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+	.x86_tss = {
+		.sp0 = (unsigned long)&init_stack + sizeof(init_stack),
+#ifdef CONFIG_X86_32
+		.ss0 = __KERNEL_DS,
+		.ss1 = __KERNEL_CS,
+		.io_bitmap_base	= INVALID_IO_BITMAP_OFFSET,
+#endif
+	 },
+#ifdef CONFIG_X86_32
+	 /*
+	  * Note that the .io_bitmap member must be extra-big. This is because
+	  * the CPU will access an additional byte beyond the end of the IO
+	  * permission bitmap. The extra byte must be all 1 bits, and must
+	  * be within the limit.
+	  */
+	.io_bitmap		= { [0 ... IO_BITMAP_LONGS] = ~0 },
+#endif
+};
 EXPORT_PER_CPU_SYMBOL_GPL(cpu_tss);
 
 #ifdef CONFIG_X86_64
-- 
2.1.0



* [PATCH v2 6/6] x86, asm: Rename INIT_TSS_IST to TSS_IST
@ 2015-03-06  3:19 ` Andy Lutomirski
From: Andy Lutomirski @ 2015-03-06  3:19 UTC
  To: x86, linux-kernel, Ingo Molnar
  Cc: Borislav Petkov, Oleg Nesterov, Denys Vlasenko, Andy Lutomirski

This has nothing to do with the init thread or the initial anything.
It's just the TSS.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/kernel/entry_64.S | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 0c00fd80249a..c86f83e95f15 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -959,7 +959,7 @@ apicinterrupt IRQ_WORK_VECTOR \
 /*
  * Exception entry points.
  */
-#define INIT_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
+#define TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
 
 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
 ENTRY(\sym)
@@ -1015,13 +1015,13 @@ ENTRY(\sym)
 	.endif
 
 	.if \shift_ist != -1
-	subq $EXCEPTION_STKSZ, INIT_TSS_IST(\shift_ist)
+	subq $EXCEPTION_STKSZ, TSS_IST(\shift_ist)
 	.endif
 
 	call \do_sym
 
 	.if \shift_ist != -1
-	addq $EXCEPTION_STKSZ, INIT_TSS_IST(\shift_ist)
+	addq $EXCEPTION_STKSZ, TSS_IST(\shift_ist)
 	.endif
 
 	/* these procedures expect "no swapgs" flag in ebx */
-- 
2.1.0



* Re: [PATCH v2 6/6] x86, asm: Rename INIT_TSS_IST to TSS_IST
@ 2015-03-06  7:30   ` Ingo Molnar
From: Ingo Molnar @ 2015-03-06  7:30 UTC
  To: Andy Lutomirski
  Cc: x86, linux-kernel, Borislav Petkov, Oleg Nesterov,
	Denys Vlasenko, Linus Torvalds, Thomas Gleixner, H. Peter Anvin


* Andy Lutomirski <luto@amacapital.net> wrote:

> This has nothing to do with the init thread or the initial anything.
> It's just the TSS.
> 
> Signed-off-by: Andy Lutomirski <luto@amacapital.net>
> ---
>  arch/x86/kernel/entry_64.S | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
> index 0c00fd80249a..c86f83e95f15 100644
> --- a/arch/x86/kernel/entry_64.S
> +++ b/arch/x86/kernel/entry_64.S
> @@ -959,7 +959,7 @@ apicinterrupt IRQ_WORK_VECTOR \
>  /*
>   * Exception entry points.
>   */
> -#define INIT_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
> +#define TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
>  
>  .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
>  ENTRY(\sym)
> @@ -1015,13 +1015,13 @@ ENTRY(\sym)
>  	.endif
>  
>  	.if \shift_ist != -1
> -	subq $EXCEPTION_STKSZ, INIT_TSS_IST(\shift_ist)
> +	subq $EXCEPTION_STKSZ, TSS_IST(\shift_ist)
>  	.endif
>  
>  	call \do_sym
>  
>  	.if \shift_ist != -1
> -	addq $EXCEPTION_STKSZ, INIT_TSS_IST(\shift_ist)
> +	addq $EXCEPTION_STKSZ, TSS_IST(\shift_ist)
>  	.endif
>  
>  	/* these procedures expect "no swapgs" flag in ebx */

If you don't mind, I've renamed this to 'CPU_TSS_IST', to be in line
with cpu_tss.

The per-cpuness of this symbol gets lost at the usage sites, because 
the PER_CPU_VAR() reference is hidden in a macro.

Thanks,

	Ingo


* [tip:x86/asm] x86/asm/entry: Add this_cpu_sp0() to read sp0 for the current cpu
@ 2015-03-06  8:37   ` tip-bot for Andy Lutomirski
From: tip-bot for Andy Lutomirski @ 2015-03-06  8:37 UTC
  To: linux-tip-commits
  Cc: mingo, torvalds, rusty, konrad.wilk, linux-kernel, dvlasenk,
	boris.ostrovsky, hpa, bp, tglx, luto, oleg

Commit-ID:  8ef46a672a7d852709561d10672b6eaa8a4acd82
Gitweb:     http://git.kernel.org/tip/8ef46a672a7d852709561d10672b6eaa8a4acd82
Author:     Andy Lutomirski <luto@amacapital.net>
AuthorDate: Thu, 5 Mar 2015 19:19:02 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 6 Mar 2015 08:32:57 +0100

x86/asm/entry: Add this_cpu_sp0() to read sp0 for the current cpu

We currently store references to the top of the kernel stack in
multiple places: kernel_stack (with an offset) and
init_tss.x86_tss.sp0 (no offset).  The latter is defined by
hardware and is a clean canonical way to find the top of the
stack.  Add an accessor so we can start using it.

This needs minor paravirt tweaks.  On native, sp0 defines the
top of the kernel stack and is therefore always correct.  On Xen
and lguest, the hypervisor tracks the top of the stack, but we
want to start reading sp0 in the kernel.  Fixing this is simple:
just update our local copy of sp0 as well as the hypervisor's
copy on task switches.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/8d675581859712bee09a055ed8f785d80dac1eca.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/processor.h | 5 +++++
 arch/x86/kernel/process.c        | 1 +
 arch/x86/lguest/boot.c           | 1 +
 arch/x86/xen/enlighten.c         | 1 +
 4 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 7be2c9a..71c3a82 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -564,6 +564,11 @@ static inline void native_swapgs(void)
 #endif
 }
 
+static inline unsigned long this_cpu_sp0(void)
+{
+	return this_cpu_read_stable(init_tss.x86_tss.sp0);
+}
+
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #else
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 046e2d6..ff5c908 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -38,6 +38,7 @@
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
 __visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, init_tss) = INIT_TSS;
+EXPORT_PER_CPU_SYMBOL_GPL(init_tss);
 
 #ifdef CONFIG_X86_64
 static DEFINE_PER_CPU(unsigned char, is_idle);
diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
index ac4453d..8561585 100644
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -1076,6 +1076,7 @@ static void lguest_load_sp0(struct tss_struct *tss,
 {
 	lazy_hcall3(LHCALL_SET_STACK, __KERNEL_DS | 0x1, thread->sp0,
 		   THREAD_SIZE / PAGE_SIZE);
+	tss->x86_tss.sp0 = thread->sp0;
 }
 
 /* Let's just say, I wouldn't do debugging under a Guest. */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 5240f56..81665c9 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -912,6 +912,7 @@ static void xen_load_sp0(struct tss_struct *tss,
 	mcs = xen_mc_entry(0);
 	MULTI_stack_switch(mcs.mc, __KERNEL_DS, thread->sp0);
 	xen_mc_issue(PARAVIRT_LAZY_CPU);
+	tss->x86_tss.sp0 = thread->sp0;
 }
 
 static void xen_set_iopl_mask(unsigned mask)


* [tip:x86/asm] x86/asm/entry: Switch all C consumers of kernel_stack to this_cpu_sp0()
@ 2015-03-06  8:38   ` tip-bot for Andy Lutomirski
From: tip-bot for Andy Lutomirski @ 2015-03-06  8:38 UTC
  To: linux-tip-commits
  Cc: bp, mingo, oleg, torvalds, linux-kernel, hpa, tglx, luto, dvlasenk

Commit-ID:  75182b1632a89f12540baa1806a7c5c180db620c
Gitweb:     http://git.kernel.org/tip/75182b1632a89f12540baa1806a7c5c180db620c
Author:     Andy Lutomirski <luto@amacapital.net>
AuthorDate: Thu, 5 Mar 2015 19:19:03 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 6 Mar 2015 08:32:57 +0100

x86/asm/entry: Switch all C consumers of kernel_stack to this_cpu_sp0()

This will make modifying the semantics of kernel_stack easier.

The change to ist_begin_non_atomic() is necessary because sp0 no
longer points to the same THREAD_SIZE-aligned region as RSP;
it's one byte too high for that.  At Denys' suggestion, rather
than offsetting it, just check explicitly that we're in the
correct range ending at sp0.  This has the added benefit that we
no longer assume that the thread stack is aligned to
THREAD_SIZE.

Suggested-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ef8254ad414cbb8034c9a56396eeb24f5dd5b0de.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/thread_info.h | 3 +--
 arch/x86/kernel/traps.c            | 4 ++--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 1d4e4f2..a2fa189 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -159,8 +159,7 @@ DECLARE_PER_CPU(unsigned long, kernel_stack);
 static inline struct thread_info *current_thread_info(void)
 {
 	struct thread_info *ti;
-	ti = (void *)(this_cpu_read_stable(kernel_stack) +
-		      KERNEL_STACK_OFFSET - THREAD_SIZE);
+	ti = (void *)(this_cpu_sp0() - THREAD_SIZE);
 	return ti;
 }
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 9965bd1..fa29058 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -174,8 +174,8 @@ void ist_begin_non_atomic(struct pt_regs *regs)
 	 * will catch asm bugs and any attempt to use ist_preempt_enable
 	 * from double_fault.
 	 */
-	BUG_ON(((current_stack_pointer() ^ this_cpu_read_stable(kernel_stack))
-		& ~(THREAD_SIZE - 1)) != 0);
+	BUG_ON((unsigned long)(this_cpu_sp0() - current_stack_pointer()) >=
+	       THREAD_SIZE);
 
 	preempt_count_sub(HARDIRQ_OFFSET);
 }


* [tip:x86/asm] x86/asm/entry/64/compat: Change the 32-bit sysenter code to use sp0
@ 2015-03-06  8:38   ` tip-bot for Andy Lutomirski
From: tip-bot for Andy Lutomirski @ 2015-03-06  8:38 UTC
  To: linux-tip-commits
  Cc: hpa, dvlasenk, linux-kernel, bp, mingo, oleg, luto, torvalds, tglx

Commit-ID:  9d0c914c60f4d3123debb653340dc1f7cf44939d
Gitweb:     http://git.kernel.org/tip/9d0c914c60f4d3123debb653340dc1f7cf44939d
Author:     Andy Lutomirski <luto@amacapital.net>
AuthorDate: Thu, 5 Mar 2015 19:19:04 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 6 Mar 2015 08:32:58 +0100

x86/asm/entry/64/compat: Change the 32-bit sysenter code to use sp0

The ia32 sysenter code computed the top of the kernel stack in
rsp by reading kernel_stack and then adjusting it.  It can be
simplified to just read sp0 directly.

This requires the addition of a new asm-offsets entry for sp0.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/88ff9006163d296a0665338585c36d9bfb85235d.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/ia32/ia32entry.S        | 3 +--
 arch/x86/kernel/asm-offsets_64.c | 1 +
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index ed97463..719db63 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -113,8 +113,7 @@ ENTRY(ia32_sysenter_target)
 	CFI_DEF_CFA	rsp,0
 	CFI_REGISTER	rsp,rbp
 	SWAPGS_UNSAFE_STACK
-	movq	PER_CPU_VAR(kernel_stack), %rsp
-	addq	$(KERNEL_STACK_OFFSET),%rsp
+	movq	PER_CPU_VAR(init_tss + TSS_sp0), %rsp
 	/*
 	 * No need to follow this irqs on/off section: the syscall
 	 * disabled irqs, here we enable it straight after entry:
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index fdcbb4d..5ce6f2d 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -81,6 +81,7 @@ int main(void)
 #undef ENTRY
 
 	OFFSET(TSS_ist, tss_struct, x86_tss.ist);
+	OFFSET(TSS_sp0, tss_struct, x86_tss.sp0);
 	BLANK();
 
 	DEFINE(__NR_syscall_max, sizeof(syscalls_64) - 1);


* [tip:x86/asm] x86/asm/entry: Rename 'init_tss' to 'cpu_tss'
@ 2015-03-06  8:38   ` tip-bot for Andy Lutomirski
From: tip-bot for Andy Lutomirski @ 2015-03-06  8:38 UTC
  To: linux-tip-commits
  Cc: tglx, bp, luto, hpa, mingo, dvlasenk, torvalds, oleg, linux-kernel

Commit-ID:  24933b82c0d9a711475a5ef7904eb733f561e637
Gitweb:     http://git.kernel.org/tip/24933b82c0d9a711475a5ef7904eb733f561e637
Author:     Andy Lutomirski <luto@amacapital.net>
AuthorDate: Thu, 5 Mar 2015 19:19:05 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 6 Mar 2015 08:32:58 +0100

x86/asm/entry: Rename 'init_tss' to 'cpu_tss'

It has nothing to do with init -- there's only one TSS per CPU.

Other names considered include:

 - current_tss: Confusing because we never switch the TSS.
 - singleton_tss: Too long.

This patch was generated with 's/init_tss/cpu_tss/g'.  Followup
patches will fix INIT_TSS and INIT_TSS_IST by hand.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/da29fb2a793e4f649d93ce2d1ed320ebe8516262.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/ia32/ia32entry.S        | 2 +-
 arch/x86/include/asm/processor.h | 4 ++--
 arch/x86/kernel/cpu/common.c     | 6 +++---
 arch/x86/kernel/entry_64.S       | 2 +-
 arch/x86/kernel/ioport.c         | 2 +-
 arch/x86/kernel/process.c        | 6 +++---
 arch/x86/kernel/process_32.c     | 2 +-
 arch/x86/kernel/process_64.c     | 2 +-
 arch/x86/kernel/vm86_32.c        | 4 ++--
 arch/x86/power/cpu.c             | 2 +-
 10 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 719db63..ad9efef 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -113,7 +113,7 @@ ENTRY(ia32_sysenter_target)
 	CFI_DEF_CFA	rsp,0
 	CFI_REGISTER	rsp,rbp
 	SWAPGS_UNSAFE_STACK
-	movq	PER_CPU_VAR(init_tss + TSS_sp0), %rsp
+	movq	PER_CPU_VAR(cpu_tss + TSS_sp0), %rsp
 	/*
 	 * No need to follow this irqs on/off section: the syscall
 	 * disabled irqs, here we enable it straight after entry:
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 71c3a82..117ee65 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -282,7 +282,7 @@ struct tss_struct {
 
 } ____cacheline_aligned;
 
-DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, init_tss);
+DECLARE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss);
 
 /*
  * Save the original ist values for checking stack pointers during debugging
@@ -566,7 +566,7 @@ static inline void native_swapgs(void)
 
 static inline unsigned long this_cpu_sp0(void)
 {
-	return this_cpu_read_stable(init_tss.x86_tss.sp0);
+	return this_cpu_read_stable(cpu_tss.x86_tss.sp0);
 }
 
 #ifdef CONFIG_PARAVIRT
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2346c95..5d0f0cc 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -979,7 +979,7 @@ static void syscall32_cpu_init(void)
 void enable_sep_cpu(void)
 {
 	int cpu = get_cpu();
-	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
 
 	if (!boot_cpu_has(X86_FEATURE_SEP)) {
 		put_cpu();
@@ -1307,7 +1307,7 @@ void cpu_init(void)
 	 */
 	load_ucode_ap();
 
-	t = &per_cpu(init_tss, cpu);
+	t = &per_cpu(cpu_tss, cpu);
 	oist = &per_cpu(orig_ist, cpu);
 
 #ifdef CONFIG_NUMA
@@ -1391,7 +1391,7 @@ void cpu_init(void)
 {
 	int cpu = smp_processor_id();
 	struct task_struct *curr = current;
-	struct tss_struct *t = &per_cpu(init_tss, cpu);
+	struct tss_struct *t = &per_cpu(cpu_tss, cpu);
 	struct thread_struct *thread = &curr->thread;
 
 	wait_for_master_cpu(cpu);
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 622ce42..0c00fd8 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -959,7 +959,7 @@ apicinterrupt IRQ_WORK_VECTOR \
 /*
  * Exception entry points.
  */
-#define INIT_TSS_IST(x) PER_CPU_VAR(init_tss) + (TSS_ist + ((x) - 1) * 8)
+#define INIT_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
 
 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
 ENTRY(\sym)
diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
index 4ddaf66..37dae79 100644
--- a/arch/x86/kernel/ioport.c
+++ b/arch/x86/kernel/ioport.c
@@ -54,7 +54,7 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
 	 * because the ->io_bitmap_max value must match the bitmap
 	 * contents:
 	 */
-	tss = &per_cpu(init_tss, get_cpu());
+	tss = &per_cpu(cpu_tss, get_cpu());
 
 	if (turn_on)
 		bitmap_clear(t->io_bitmap_ptr, from, num);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ff5c908..6f60873 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -37,8 +37,8 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, init_tss) = INIT_TSS;
-EXPORT_PER_CPU_SYMBOL_GPL(init_tss);
+__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = INIT_TSS;
+EXPORT_PER_CPU_SYMBOL_GPL(cpu_tss);
 
 #ifdef CONFIG_X86_64
 static DEFINE_PER_CPU(unsigned char, is_idle);
@@ -110,7 +110,7 @@ void exit_thread(void)
 	unsigned long *bp = t->io_bitmap_ptr;
 
 	if (bp) {
-		struct tss_struct *tss = &per_cpu(init_tss, get_cpu());
+		struct tss_struct *tss = &per_cpu(cpu_tss, get_cpu());
 
 		t->io_bitmap_ptr = NULL;
 		clear_thread_flag(TIF_IO_BITMAP);
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 603c4f9..d3460af 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -248,7 +248,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	struct thread_struct *prev = &prev_p->thread,
 				 *next = &next_p->thread;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
 	fpu_switch_t fpu;
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 854b598..2cd562f 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -277,7 +277,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	struct thread_struct *prev = &prev_p->thread;
 	struct thread_struct *next = &next_p->thread;
 	int cpu = smp_processor_id();
-	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
 	unsigned fsindex, gsindex;
 	fpu_switch_t fpu;
 
diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
index e8edcf5..fc9db6e 100644
--- a/arch/x86/kernel/vm86_32.c
+++ b/arch/x86/kernel/vm86_32.c
@@ -150,7 +150,7 @@ struct pt_regs *save_v86_state(struct kernel_vm86_regs *regs)
 		do_exit(SIGSEGV);
 	}
 
-	tss = &per_cpu(init_tss, get_cpu());
+	tss = &per_cpu(cpu_tss, get_cpu());
 	current->thread.sp0 = current->thread.saved_sp0;
 	current->thread.sysenter_cs = __KERNEL_CS;
 	load_sp0(tss, &current->thread);
@@ -318,7 +318,7 @@ static void do_sys_vm86(struct kernel_vm86_struct *info, struct task_struct *tsk
 	tsk->thread.saved_fs = info->regs32->fs;
 	tsk->thread.saved_gs = get_user_gs(info->regs32);
 
-	tss = &per_cpu(init_tss, get_cpu());
+	tss = &per_cpu(cpu_tss, get_cpu());
 	tsk->thread.sp0 = (unsigned long) &info->VM86_TSS_ESP0;
 	if (cpu_has_sep)
 		tsk->thread.sysenter_cs = 0;
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 3e32ed5..757678f 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -134,7 +134,7 @@ static void do_fpu_end(void)
 static void fix_processor_context(void)
 {
 	int cpu = smp_processor_id();
-	struct tss_struct *t = &per_cpu(init_tss, cpu);
+	struct tss_struct *t = &per_cpu(cpu_tss, cpu);
 #ifdef CONFIG_X86_64
 	struct desc_struct *desc = get_cpu_gdt_table(cpu);
 	tss_desc tss;


* [tip:x86/asm] x86/asm/entry: Remove INIT_TSS and fold the definitions into 'cpu_tss'
@ 2015-03-06  8:39   ` tip-bot for Andy Lutomirski
From: tip-bot for Andy Lutomirski @ 2015-03-06  8:39 UTC
  To: linux-tip-commits
  Cc: hpa, dvlasenk, mingo, luto, bp, oleg, torvalds, tglx, linux-kernel

Commit-ID:  d0a0de21f82bbc1737ea3c831f018d0c2bc6b9c2
Gitweb:     http://git.kernel.org/tip/d0a0de21f82bbc1737ea3c831f018d0c2bc6b9c2
Author:     Andy Lutomirski <luto@amacapital.net>
AuthorDate: Thu, 5 Mar 2015 19:19:06 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 6 Mar 2015 08:32:58 +0100

x86/asm/entry: Remove INIT_TSS and fold the definitions into 'cpu_tss'

The INIT_TSS macro is unnecessary.  Just define the initial TSS
where 'cpu_tss' is defined.

While we're at it, merge the 32-bit and 64-bit definitions.  The
only syntactic change is that 32-bit kernels were computing sp0
as long, but now they compute it as unsigned long.

Verified by objdump: the contents and relocations of
.data..percpu..shared_aligned are unchanged on 32-bit and 64-bit
kernels.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/8fc39fa3f6c5d635e93afbdd1a0fe0678a6d7913.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/processor.h | 20 --------------------
 arch/x86/kernel/process.c        | 20 +++++++++++++++++++-
 2 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 117ee65..f5e3ec6 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -818,22 +818,6 @@ static inline void spin_lock_prefetch(const void *x)
 	.io_bitmap_ptr		= NULL,					  \
 }
 
-/*
- * Note that the .io_bitmap member must be extra-big. This is because
- * the CPU will access an additional byte beyond the end of the IO
- * permission bitmap. The extra byte must be all 1 bits, and must
- * be within the limit.
- */
-#define INIT_TSS  {							  \
-	.x86_tss = {							  \
-		.sp0		= sizeof(init_stack) + (long)&init_stack, \
-		.ss0		= __KERNEL_DS,				  \
-		.ss1		= __KERNEL_CS,				  \
-		.io_bitmap_base	= INVALID_IO_BITMAP_OFFSET,		  \
-	 },								  \
-	.io_bitmap		= { [0 ... IO_BITMAP_LONGS] = ~0 },	  \
-}
-
 extern unsigned long thread_saved_pc(struct task_struct *tsk);
 
 #define THREAD_SIZE_LONGS      (THREAD_SIZE/sizeof(unsigned long))
@@ -892,10 +876,6 @@ extern unsigned long thread_saved_pc(struct task_struct *tsk);
 	.sp0 = (unsigned long)&init_stack + sizeof(init_stack) \
 }
 
-#define INIT_TSS  { \
-	.x86_tss.sp0 = (unsigned long)&init_stack + sizeof(init_stack) \
-}
-
 /*
  * Return saved PC of a blocked thread.
  * What is this good for? it will be always the scheduler or ret_from_fork.
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 6f60873..f4c0af7 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -37,7 +37,25 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = INIT_TSS;
+__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
+	.x86_tss = {
+		.sp0 = (unsigned long)&init_stack + sizeof(init_stack),
+#ifdef CONFIG_X86_32
+		.ss0 = __KERNEL_DS,
+		.ss1 = __KERNEL_CS,
+		.io_bitmap_base	= INVALID_IO_BITMAP_OFFSET,
+#endif
+	 },
+#ifdef CONFIG_X86_32
+	 /*
+	  * Note that the .io_bitmap member must be extra-big. This is because
+	  * the CPU will access an additional byte beyond the end of the IO
+	  * permission bitmap. The extra byte must be all 1 bits, and must
+	  * be within the limit.
+	  */
+	.io_bitmap		= { [0 ... IO_BITMAP_LONGS] = ~0 },
+#endif
+};
 EXPORT_PER_CPU_SYMBOL_GPL(cpu_tss);
 
 #ifdef CONFIG_X86_64


* [tip:x86/asm] x86/asm/entry: Rename 'INIT_TSS_IST' to 'CPU_TSS_IST'
@ 2015-03-06  8:39   ` tip-bot for Andy Lutomirski
From: tip-bot for Andy Lutomirski @ 2015-03-06  8:39 UTC
  To: linux-tip-commits
  Cc: dvlasenk, tglx, oleg, bp, torvalds, mingo, luto, linux-kernel, hpa

Commit-ID:  9b47668843d800ed57f6f6bfd6f5c4cffdf201c6
Gitweb:     http://git.kernel.org/tip/9b47668843d800ed57f6f6bfd6f5c4cffdf201c6
Author:     Andy Lutomirski <luto@amacapital.net>
AuthorDate: Thu, 5 Mar 2015 19:19:07 -0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 6 Mar 2015 08:32:58 +0100

x86/asm/entry: Rename 'INIT_TSS_IST' to 'CPU_TSS_IST'

This has nothing to do with the init thread or the initial
anything. It's just the CPU's TSS.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/a0bd5e26b32a2e1f08ff99017d0997118fbb2485.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/entry_64.S | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 0c00fd8..5117a2b 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -959,7 +959,7 @@ apicinterrupt IRQ_WORK_VECTOR \
 /*
  * Exception entry points.
  */
-#define INIT_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
+#define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss) + (TSS_ist + ((x) - 1) * 8)
 
 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
 ENTRY(\sym)
@@ -1015,13 +1015,13 @@ ENTRY(\sym)
 	.endif
 
 	.if \shift_ist != -1
-	subq $EXCEPTION_STKSZ, INIT_TSS_IST(\shift_ist)
+	subq $EXCEPTION_STKSZ, CPU_TSS_IST(\shift_ist)
 	.endif
 
 	call \do_sym
 
 	.if \shift_ist != -1
-	addq $EXCEPTION_STKSZ, INIT_TSS_IST(\shift_ist)
+	addq $EXCEPTION_STKSZ, CPU_TSS_IST(\shift_ist)
 	.endif
 
 	/* these procedures expect "no swapgs" flag in ebx */

