From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 5 Jun 2019 22:40:03 +0200
From: Peter Zijlstra
To: Alex Kogan
Cc: Waiman Long, linux@armlinux.org.uk, mingo@redhat.com, will.deacon@arm.com,
	arnd@arndb.de, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner, bp@alien8.de, hpa@zytor.com,
	x86@kernel.org, Steven Sistare, Daniel Jordan, dave.dice@oracle.com,
	Rahul Yadav
Subject: Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock
Message-ID: <20190605204003.GC3402@hirez.programming.kicks-ass.net>
References: <20190329152006.110370-1-alex.kogan@oracle.com>
	<20190329152006.110370-4-alex.kogan@oracle.com>
	<60a3a2d8-d222-73aa-2df1-64c9d3fa3241@redhat.com>
	<20190402094320.GM11158@hirez.programming.kicks-ass.net>
	<6AEDE4F2-306A-4DF9-9307-9E3517C68A2B@oracle.com>
	<20190403160112.GK4038@hirez.programming.kicks-ass.net>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Jun 04, 2019 at 07:21:13PM -0400, Alex Kogan wrote:

> Trying to resume this work, I am looking for concrete steps required
> to integrate CNA with the paravirt patching.
>
> Looking at alternative_instructions(), I wonder if I need to add
> another call, something like apply_numa() similar to apply_paravirt(),
> and do the patch work there.  Or perhaps I should "just" initialize
> the pv_ops structure with the corresponding
> numa_queued_spinlock_slowpath() in paravirt.c?

Yeah, just initialize the pv_ops.lock.* thingies to contain the numa
variant before apply_paravirt() happens.

> Also, the paravirt code is under arch/x86, while CNA is generic (not
> x86-specific).  Do you still want to see CNA-related patching residing
> under arch/x86?
>
> We still need a config option (something like NUMA_AWARE_SPINLOCKS) to
> enable CNA patching under this config only, correct?

There is the static_call() stuff that could be generic; I posted a new
version of that today (x86 only for now, but IIRC there's arm64 patches
for that around somewhere too).

https://lkml.kernel.org/r/20190605130753.327195108@infradead.org

Which would allow something a little like this:


diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index bd5ac6cc37db..01feaf912bd7 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -63,29 +63,7 @@ static inline bool vcpu_is_preempted(long cpu)
 #endif
 
 #ifdef CONFIG_PARAVIRT
-DECLARE_STATIC_KEY_TRUE(virt_spin_lock_key);
-
 void native_pv_lock_init(void) __init;
-
-#define virt_spin_lock virt_spin_lock
-static inline bool virt_spin_lock(struct qspinlock *lock)
-{
-	if (!static_branch_likely(&virt_spin_lock_key))
-		return false;
-
-	/*
-	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
-	 * back to a Test-and-Set spinlock, because fair locks have
-	 * horrible lock 'holder' preemption issues.
-	 */
-
-	do {
-		while (atomic_read(&lock->val) != 0)
-			cpu_relax();
-	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
-
-	return true;
-}
 #else
 static inline void native_pv_lock_init(void)
 {
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5169b8cc35bb..78be9e474e94 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -531,7 +531,7 @@ static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
 {
 	native_smp_prepare_cpus(max_cpus);
 	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
-		static_branch_disable(&virt_spin_lock_key);
+		static_call_update(queued_spin_lock_slowpath, __queued_spin_lock_slowpath);
 }
 
 static void __init kvm_smp_prepare_boot_cpu(void)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 98039d7fb998..ae6d15f84867 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -105,12 +105,10 @@ static unsigned paravirt_patch_jmp(void *insn_buff, const void *target,
 }
 #endif
 
-DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
-
 void __init native_pv_lock_init(void)
 {
-	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
-		static_branch_disable(&virt_spin_lock_key);
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		static_call_update(queued_spin_lock_slowpath, __tas_spin_lock_slowpath);
 }
 
 unsigned paravirt_patch_default(u8 type, void *insn_buff,
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 3776122c87cc..86808127b6e6 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -70,7 +70,7 @@ void xen_init_lock_cpu(int cpu)
 
 	if (!xen_pvspin) {
 		if (cpu == 0)
-			static_branch_disable(&virt_spin_lock_key);
+			static_call_update(queued_spin_lock_slowpath, __queued_spin_lock_slowpath);
 		return;
 	}
 
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index fde943d180e0..8ca4dd9db931 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -65,7 +65,9 @@ static __always_inline int queued_spin_trylock(struct qspinlock *lock)
 	return likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL));
 }
 
-extern void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+DECLARE_STATIC_CALL(queued_spin_lock_slowpath, __queued_spin_lock_slowpath);
 
 /**
  * queued_spin_lock - acquire a queued spinlock
@@ -78,7 +80,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
 		return;
 
-	queued_spin_lock_slowpath(lock, val);
+	static_call(queued_spin_lock_slowpath, lock, val);
 }
 
 #ifndef queued_spin_unlock
@@ -95,13 +97,6 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 }
 #endif
 
-#ifndef virt_spin_lock
-static __always_inline bool virt_spin_lock(struct qspinlock *lock)
-{
-	return false;
-}
-#endif
-
 /*
  * Remapping spinlock architecture specific functions to the corresponding
  * queued spinlock functions.
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2473f10c6956..0e9e61637d56 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -290,6 +290,20 @@ static __always_inline u32  __pv_wait_head_or_lock(struct qspinlock *lock,
 
 #endif /* _GEN_PV_LOCK_SLOWPATH */
 
+void __tas_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	/*
+	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+	 * back to a Test-and-Set spinlock, because fair locks have
+	 * horrible lock 'holder' preemption issues.
+	 */
+
+	do {
+		while (atomic_read(&lock->val) != 0)
+			cpu_relax();
+	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
+}
+
 /**
  * queued_spin_lock_slowpath - acquire the queued spinlock
  * @lock: Pointer to queued spinlock structure
@@ -311,7 +325,7 @@ static __always_inline u32  __pv_wait_head_or_lock(struct qspinlock *lock,
  * contended             :    (*,x,y) +--> (*,0,0) ---> (*,0,1) -'  :
  *   queue               :         ^--'                             :
  */
-void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+void __queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	struct mcs_spinlock *prev, *next, *node;
 	u32 old, tail;
@@ -322,9 +336,6 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (pv_enabled())
 		goto pv_queue;
 
-	if (virt_spin_lock(lock))
-		return;
-
 	/*
 	 * Wait for in-progress pending->locked hand-overs with a bounded
 	 * number of spins so that we guarantee forward progress.
@@ -558,7 +569,9 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 */
 	__this_cpu_dec(qnodes[0].mcs.count);
 }
-EXPORT_SYMBOL(queued_spin_lock_slowpath);
+EXPORT_SYMBOL(__queued_spin_lock_slowpath);
+
+DEFINE_STATIC_CALL(queued_spin_lock_slowpath, __queued_spin_lock_slowpath);
 
 /*
  * Generate the paravirt code for queued_spin_unlock_slowpath().