Date: Tue, 30 Jan 2018 10:56:53 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Josh Poimboeuf
Cc: David Woodhouse, Thomas Gleixner, linux-kernel@vger.kernel.org,
	Dave Hansen, Ashok Raj, Tim Chen, Andy Lutomirski, Linus Torvalds,
	Greg KH, Andrea Arcangeli, Andi Kleen, Arjan Van De Ven,
	Dan Williams, Paolo Bonzini, Jun Nakajima, Asit Mallick, Jason Baron
Subject: Re: [PATCH 20/24] objtool: Another static block fail
Message-ID: <20180130095653.GZ2269@hirez.programming.kicks-ass.net>
References: <20180123152539.374360046@infradead.org>
	<20180123152639.170696914@infradead.org>
	<20180129225252.bi2etgk3eqprcv3x@treble>
In-Reply-To: <20180129225252.bi2etgk3eqprcv3x@treble>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jan 29, 2018 at 04:52:53PM -0600, Josh Poimboeuf wrote:
> On Tue, Jan 23, 2018 at 04:25:59PM +0100, Peter Zijlstra wrote:
> > I've observed GCC generate:
> >
> > sym:
> >	NOP/JMP 1f	(static_branch)
> >	JMP 2f
> > 1:	/* crud */
> >	JMP 3f
> > 2:	/* other crud */
> >
> > 3:	RETQ
> >
> >
> > This means we need to follow unconditional jumps; be conservative and
> > only follow if its a unique jump.
> >
> > (I've not yet figured out which CONFIG option is responsible for this,
> > a normal defconfig build does not generate crap like this)
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> Any chance we can just add a compiler barrier to the assertion macro and
> avoid all this grow_static_blocks() mess?  It seems a bit... fragile.

It is all rather unfortunate yes.. :/ I've tried to keep the grow stuff
as conservative as possible while still covering all the weirdness I
found. And while it was great fun, I do agree it would be much better to
not have to do this.

You're thinking of something like this?
static __always_inline void arch_static_assert(void)
{
	asm volatile ("1:\n\t"
		      ".pushsection .discard.jump_assert \n\t"
		      _ASM_ALIGN "\n\t"
		      _ASM_PTR "1b \n\t"
-		      ".popsection \n\t");
+		      ".popsection \n\t" ::: "memory");
}

That doesn't seem to matter much; see here:

static void ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
{
	struct rq *rq;

	if (!schedstat_enabled())
		return;

	rq = this_rq();

$ objdump -dr build/kernel/sched/core.o

0000000000001910 <ttwu_stat>:
    1910:	e8 00 00 00 00       	callq  1915 <ttwu_stat+0x5>
			1911: R_X86_64_PC32	__fentry__-0x4
    1915:	41 57                	push   %r15
    1917:	41 56                	push   %r14
    1919:	41 55                	push   %r13
    191b:	41 54                	push   %r12
    191d:	55                   	push   %rbp
    191e:	53                   	push   %rbx
    191f:	0f 1f 44 00 00       	nopl   0x0(%rax,%rax,1)
    1924:	eb 25                	jmp    194b <ttwu_stat+0x3b>
    1926:	41 89 d5             	mov    %edx,%r13d
    1929:	41 89 f4             	mov    %esi,%r12d
    192c:	48 89 fb             	mov    %rdi,%rbx
    192f:	49 c7 c6 00 00 00 00 	mov    $0x0,%r14
			1932: R_X86_64_32S	runqueues

$ objdump -j __jump_table -sr build/kernel/sched.o

0000000000000048 R_X86_64_64  .text+0x000000000000191f
0000000000000050 R_X86_64_64  .text+0x0000000000001926
0000000000000058 R_X86_64_64  sched_schedstats

$ objdump -j .discard.jump_assert -dr build/kernel/sched.o

0000000000000000 R_X86_64_64  .text+0x000000000000192f

It still lifts random crud over that first initial statement (the rq load).