From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 14 Jun 2018 16:42:54 +0200
From: Peter Zijlstra 
To: Thomas Hellstrom 
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Ingo Molnar , Jonathan Corbet , Gustavo Padovan , Maarten Lankhorst , Sean Paul , David Airlie , Davidlohr Bueso , "Paul E.
McKenney" , Josh Triplett , Thomas Gleixner , Kate Stewart , Philippe Ombredanne , Greg Kroah-Hartman , linux-doc@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org Subject: Re: [PATCH 1/2] locking: Implement an algorithm choice for Wound-Wait mutexes Message-ID: <20180614144254.GB12198@hirez.programming.kicks-ass.net> References: <20180613074745.14750-1-thellstrom@vmware.com> <20180613074745.14750-2-thellstrom@vmware.com> <20180613095012.GW12198@hirez.programming.kicks-ass.net> <69f3dee9-4782-bc90-3ee2-813ac6835c4a@vmware.com> <20180613131000.GX12198@hirez.programming.kicks-ass.net> <9afd482d-7082-fa17-5e34-179a652376e5@vmware.com> <20180614105151.GY12198@hirez.programming.kicks-ass.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.9.5 (2018-04-13) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Jun 14, 2018 at 01:48:39PM +0200, Thomas Hellstrom wrote: > The literature makes a distinction between "killed" and "wounded". In our > context, "Killed" is when a transaction actually receives an -EDEADLK and > needs to back off. "Wounded" is when someone (typically another transaction) > requests a transaction to kill itself. A wound will often, but not always, > lead to a kill. If the wounded transaction has finished its locking > sequence, or has the opportunity to grab uncontended ww mutexes or steal > contended (non-handoff) ww mutexes to finish its transaction it will do so > and never kill itself. Hopefully I got it all right this time; I folded your patch in and mucked around with it a bit, but haven't done anything except compile it. I left the context/transaction thing because well, that's what we called the thing. diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h index 39fda195bf78..50ef5a10cfa0 100644 --- a/include/linux/ww_mutex.h +++ b/include/linux/ww_mutex.h @@ -8,6 +8,8 @@ * * Wound/wait implementation: * Copyright (C) 2013 Canonical Ltd. + * Choice of algorithm: + * Copyright (C) 2018 WMWare Inc. * * This file contains the main data structure and API definitions. 
*/ @@ -23,14 +25,17 @@ struct ww_class { struct lock_class_key mutex_key; const char *acquire_name; const char *mutex_name; + unsigned int is_wait_die; }; struct ww_acquire_ctx { struct task_struct *task; unsigned long stamp; - unsigned acquired; + unsigned int acquired; + unsigned short wounded; + unsigned short is_wait_die; #ifdef CONFIG_DEBUG_MUTEXES - unsigned done_acquire; + unsigned int done_acquire; struct ww_class *ww_class; struct ww_mutex *contending_lock; #endif @@ -38,8 +43,8 @@ struct ww_acquire_ctx { struct lockdep_map dep_map; #endif #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH - unsigned deadlock_inject_interval; - unsigned deadlock_inject_countdown; + unsigned int deadlock_inject_interval; + unsigned int deadlock_inject_countdown; #endif }; @@ -58,17 +63,21 @@ struct ww_mutex { # define __WW_CLASS_MUTEX_INITIALIZER(lockname, class) #endif -#define __WW_CLASS_INITIALIZER(ww_class) \ +#define __WW_CLASS_INITIALIZER(ww_class, _is_wait_die) \ { .stamp = ATOMIC_LONG_INIT(0) \ , .acquire_name = #ww_class "_acquire" \ - , .mutex_name = #ww_class "_mutex" } + , .mutex_name = #ww_class "_mutex" \ + , .is_wait_die = _is_wait_die } #define __WW_MUTEX_INITIALIZER(lockname, class) \ { .base = __MUTEX_INITIALIZER(lockname.base) \ __WW_CLASS_MUTEX_INITIALIZER(lockname, class) } +#define DEFINE_WD_CLASS(classname) \ + struct ww_class classname = __WW_CLASS_INITIALIZER(classname, 1) + #define DEFINE_WW_CLASS(classname) \ - struct ww_class classname = __WW_CLASS_INITIALIZER(classname) + struct ww_class classname = __WW_CLASS_INITIALIZER(classname, 0) #define DEFINE_WW_MUTEX(mutexname, ww_class) \ struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class) @@ -123,6 +132,8 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx, ctx->task = current; ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp); ctx->acquired = 0; + ctx->wounded = false; + ctx->is_wait_die = ww_class->is_wait_die; #ifdef CONFIG_DEBUG_MUTEXES ctx->ww_class = ww_class; ctx->done_acquire = 0; diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index f44f658ae629..9e244af4647d 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -244,6 +244,22 @@ void __sched mutex_lock(struct mutex *lock) EXPORT_SYMBOL(mutex_lock); #endif +/* + * Wait-Die: + * The newer transactions are killed when: + * It (the new transaction) makes a request for a lock being held + * by an older transaction. + * + * Wound-Wait: + * The newer transactions are wounded when: + * An older transaction makes a request for a lock being held by + * the newer transaction. + */ + +/* + * Associate the ww_mutex @ww with the context @ww_ctx under which we acquired + * it. + */ static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx) { @@ -282,26 +298,96 @@ ww_mutex_lock_acquired(struct ww_mutex *ww, struct ww_acquire_ctx *ww_ctx) DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class); #endif ww_ctx->acquired++; + ww->ctx = ww_ctx; } +/* + * Determine if context @a is 'after' context @b. IOW, @a should be wounded in + * favour of @b. + */ static inline bool __sched __ww_ctx_stamp_after(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b) { - return a->stamp - b->stamp <= LONG_MAX && - (a->stamp != b->stamp || a > b); + + return (signed long)(a->stamp - b->stamp) > 0; } /* - * Wake up any waiters that may have to back off when the lock is held by the - * given context. + * Wait-Die; wake a younger waiter context (when locks held) such that it can die. 
* - * Due to the invariants on the wait list, this can only affect the first - * waiter with a context. + * Among waiters with context, only the first one can have other locks acquired + * already (ctx->acquired > 0), because __ww_mutex_add_waiter() and + * __ww_mutex_check_wound() wake any but the earliest context. + */ +static bool __ww_mutex_die(struct mutex *lock, struct mutex_waiter *waiter, + struct ww_acquire_ctx *ww_ctx) +{ + if (!ww_ctx->is_wait_die) + return false; + + if (waiter->ww_ctx->acquired > 0 && + __ww_ctx_stamp_after(waiter->ww_ctx, ww_ctx)) { + debug_mutex_wake_waiter(lock, waiter); + wake_up_process(waiter->task); + } + + return true; +} + +/* + * Wound-Wait; wound a younger @hold_ctx (if it has locks held). + * + * XXX more; explain why we too only need to wake the first. + */ +static bool __ww_mutex_wound(struct mutex *lock, + struct ww_acquire_ctx *ww_ctx, + struct ww_acquire_ctx *hold_ctx) +{ + struct task_struct *owner = __mutex_owner(lock); + + lockdep_assert_held(&lock->wait_lock); + + /* + * Possible through __ww_mutex_add_waiter() when we race with + * ww_mutex_set_context_fastpath(). In that case we'll get here again + * through __ww_mutex_check_waiters(). + */ + if (!hold_ctx) + return false; + + /* + * Can have !owner because of __mutex_unlock_slowpath(), but if owner, + * it cannot go away because we'll have FLAG_WAITERS set and hold + * wait_lock. + */ + if (!owner) + return false; + + if (ww_ctx->acquired > 0 && __ww_ctx_stamp_after(hold_ctx, ww_ctx)) { + hold_ctx->wounded = 1; + if (owner != current) + wake_up_process(owner); + + return true; + } + + return false; +} + +/* + * We just acquired @lock under @ww_ctx, if there are later contexts waiting + * behind us on the wait-list, check if they need wounding/killing. + * + * See __ww_mutex_add_waiter() for the list-order construction; basically the + * list is ordered by stamp, smallest (oldest) first. + * + * This relies on never mixing wait-die/wound-wait on the same wait-list; which is + * currently ensured by that being a ww_class property. * * The current task must not be on the wait list. */ static void __sched -__ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx) +__ww_mutex_check_waiters(struct mutex *lock, struct ww_acquire_ctx *ww_ctx) { struct mutex_waiter *cur; @@ -311,66 +397,50 @@ __ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx) if (!cur->ww_ctx) continue; - if (cur->ww_ctx->acquired > 0 && - __ww_ctx_stamp_after(cur->ww_ctx, ww_ctx)) { - debug_mutex_wake_waiter(lock, cur); - wake_up_process(cur->task); - } - - break; + if (__ww_mutex_die(lock, cur, ww_ctx) || + __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx)) + break; } } /* - * After acquiring lock with fastpath or when we lost out in contested - * slowpath, set ctx and wake up any waiters so they can recheck. + * After acquiring lock with fastpath, where we do not hold wait_lock, set ctx + * and wake up any waiters so they can recheck. */ static __always_inline void ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) { ww_mutex_lock_acquired(lock, ctx); - lock->ctx = ctx; - /* * The lock->ctx update should be visible on all cores before - * the atomic read is done, otherwise contended waiters might be + * the list_empty check is done, otherwise contended waiters might be * missed. The contended waiters will either see ww_ctx == NULL * and keep spinning, or it will acquire wait_lock, add itself * to waiter list and sleep. 
*/ - smp_mb(); /* ^^^ */ + smp_mb(); /* See comments above and below. */ /* - * Check if lock is contended, if not there is nobody to wake up + * [W] ww->ctx = ctx [W] list_add_tail() + * MB MB + * [R] list_empty() [R] ww->ctx + * + * The memory barrier above pairs with the memory barrier in + * __ww_mutex_add_waiter() and makes sure we either observe ww->ctx + * and/or !empty list. */ - if (likely(!(atomic_long_read(&lock->base.owner) & MUTEX_FLAG_WAITERS))) + if (likely(list_empty(&lock->base.wait_list))) return; /* - * Uh oh, we raced in fastpath, wake up everyone in this case, - * so they can see the new lock->ctx. + * Uh oh, we raced in fastpath, check if any of the waiters need wounding. */ spin_lock(&lock->base.wait_lock); - __ww_mutex_wakeup_for_backoff(&lock->base, ctx); + __ww_mutex_check_waiters(&lock->base, ctx); spin_unlock(&lock->base.wait_lock); } -/* - * After acquiring lock in the slowpath set ctx. - * - * Unlike for the fast path, the caller ensures that waiters are woken up where - * necessary. - * - * Callers must hold the mutex wait_lock. - */ -static __always_inline void -ww_mutex_set_context_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) -{ - ww_mutex_lock_acquired(lock, ctx); - lock->ctx = ctx; -} - #ifdef CONFIG_MUTEX_SPIN_ON_OWNER static inline @@ -646,37 +716,83 @@ void __sched ww_mutex_unlock(struct ww_mutex *lock) } EXPORT_SYMBOL(ww_mutex_unlock); + +static __always_inline int __sched +__ww_mutex_kill(struct mutex *lock, struct ww_acquire_ctx *ww_ctx) +{ + if (ww_ctx->acquired > 0) { +#ifdef CONFIG_DEBUG_MUTEXES + struct ww_mutex *ww; + + ww = container_of(lock, struct ww_mutex, base); + DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock); + ww_ctx->contending_lock = ww; +#endif + return -EDEADLK; + } + + return 0; +} + + +/* + * Check the wound condition for the current lock acquire. + * + * Wound-Wait: If we're wounded, kill ourself. + * + * Wait-Die: If we're trying to acquire a lock already held by an older + * context, kill ourselves. + * + * Since __ww_mutex_add_waiter() orders the wait-list on stamp, we only have to + * look at waiters before us in the wait-list. + */ static inline int __sched -__ww_mutex_lock_check_stamp(struct mutex *lock, struct mutex_waiter *waiter, +__ww_mutex_check_wound(struct mutex *lock, struct mutex_waiter *waiter, struct ww_acquire_ctx *ctx) { struct ww_mutex *ww = container_of(lock, struct ww_mutex, base); struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx); struct mutex_waiter *cur; + if (ctx->acquired == 0) + return 0; + + if (!ctx->is_wait_die) { + if (ctx->wounded) + return __ww_mutex_kill(lock, ctx); + + return 0; + } + if (hold_ctx && __ww_ctx_stamp_after(ctx, hold_ctx)) - goto deadlock; + return __ww_mutex_kill(lock, ctx); /* * If there is a waiter in front of us that has a context, then its - * stamp is earlier than ours and we must back off. + * stamp is earlier than ours and we must wound ourself. */ cur = waiter; list_for_each_entry_continue_reverse(cur, &lock->wait_list, list) { - if (cur->ww_ctx) - goto deadlock; + if (!cur->ww_ctx) + continue; + + return __ww_mutex_kill(lock, ctx); } return 0; - -deadlock: -#ifdef CONFIG_DEBUG_MUTEXES - DEBUG_LOCKS_WARN_ON(ctx->contending_lock); - ctx->contending_lock = ww; -#endif - return -EDEADLK; } +/* + * Add @waiter to the wait-list, keep the wait-list ordered by stamp, smallest + * first. Such that older contexts are preferred to acquire the lock over + * younger contexts. + * + * Waiters without context are interspersed in FIFO order. 
+ * + * Furthermore, for Wait-Die kill ourself immediately when possible (there are + * older contexts already waiting) to avoid unnecessary waiting and for + * Wound-Wait ensure we wound the owning context when it is younger. + */ static inline int __sched __ww_mutex_add_waiter(struct mutex_waiter *waiter, struct mutex *lock, @@ -684,16 +800,21 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter, { struct mutex_waiter *cur; struct list_head *pos; + bool is_wait_die; if (!ww_ctx) { list_add_tail(&waiter->list, &lock->wait_list); return 0; } + is_wait_die = ww_ctx->is_wait_die; + /* * Add the waiter before the first waiter with a higher stamp. * Waiters without a context are skipped to avoid starving - * them. + * them. Wait-Die waiters may back off here. Wound-Wait waiters + * never back off here, but they are sorted in stamp order and + * may wound the lock holder. */ pos = &lock->wait_list; list_for_each_entry_reverse(cur, &lock->wait_list, list) { @@ -701,16 +822,16 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter, continue; if (__ww_ctx_stamp_after(ww_ctx, cur->ww_ctx)) { - /* Back off immediately if necessary. */ - if (ww_ctx->acquired > 0) { -#ifdef CONFIG_DEBUG_MUTEXES - struct ww_mutex *ww; - - ww = container_of(lock, struct ww_mutex, base); - DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock); - ww_ctx->contending_lock = ww; -#endif - return -EDEADLK; + /* + * Wait-Die: if we find an older context waiting, there + * is no point in queueing behind it, as we'd have to + * wound ourselves the moment it would acquire the + * lock. + */ + if (is_wait_die) { + int ret = __ww_mutex_kill(lock, ww_ctx); + if (ret) + return ret; } break; @@ -718,17 +839,29 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter, pos = &cur->list; + /* Wait-Die: ensure younger waiters die. */ + __ww_mutex_die(lock, cur, ww_ctx); + } + + list_add_tail(&waiter->list, pos); + + /* + * Wound-Wait: if we're blocking on a mutex owned by a younger context, + * wound that such that we might proceed. + */ + if (!is_wait_die) { + struct ww_mutex *ww = container_of(lock, struct ww_mutex, base); + /* - * Wake up the waiter so that it gets a chance to back - * off. + * See ww_mutex_set_context_fastpath(). Orders the + * list_add_tail() vs the ww->ctx load, such that either we + * or the fastpath will wound @ww->ctx. */ - if (cur->ww_ctx->acquired > 0) { - debug_mutex_wake_waiter(lock, cur); - wake_up_process(cur->task); - } + smp_mb(); + + __ww_mutex_wound(lock, ww_ctx, ww->ctx); } - list_add_tail(&waiter->list, pos); return 0; } @@ -751,6 +884,14 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, if (use_ww_ctx && ww_ctx) { if (unlikely(ww_ctx == READ_ONCE(ww->ctx))) return -EALREADY; + + /* + * Reset the wounded flag after a kill. No other process can + * race and wound us here since they can't have a valid owner + * pointer at this time. + */ + if (ww_ctx->acquired == 0) + ww_ctx->wounded = 0; } preempt_disable(); @@ -772,7 +913,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, */ if (__mutex_trylock(lock)) { if (use_ww_ctx && ww_ctx) - __ww_mutex_wakeup_for_backoff(lock, ww_ctx); + __ww_mutex_check_waiters(lock, ww_ctx); goto skip_wait; } @@ -790,10 +931,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, waiter.ww_ctx = MUTEX_POISON_WW_CTX; #endif } else { - /* Add in stamp order, waking up waiters that must back off. */ + /* Add in stamp order, waking up waiters that must wound themselves. 
*/ ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx); if (ret) - goto err_early_backoff; + goto err_early_kill; waiter.ww_ctx = ww_ctx; } @@ -824,8 +965,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, goto err; } - if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) { - ret = __ww_mutex_lock_check_stamp(lock, &waiter, ww_ctx); + if (use_ww_ctx && ww_ctx) { + ret = __ww_mutex_check_wound(lock, &waiter, ww_ctx); if (ret) goto err; } @@ -859,6 +1000,16 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, acquired: __set_current_state(TASK_RUNNING); + if (use_ww_ctx && ww_ctx) { + /* + * Wound-Wait; we stole the lock (!first_waiter), check the + * waiters. This, together with XXX, ensures __ww_mutex_wound() + * only needs to check the first waiter (with context). + */ + if (!ww_ctx->is_wait_die && !__mutex_waiter_is_first(lock, &waiter)) + __ww_mutex_check_waiters(lock, ww_ctx); + } + mutex_remove_waiter(lock, &waiter, current); if (likely(list_empty(&lock->wait_list))) __mutex_clear_flag(lock, MUTEX_FLAGS); @@ -870,7 +1021,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, lock_acquired(&lock->dep_map, ip); if (use_ww_ctx && ww_ctx) - ww_mutex_set_context_slowpath(ww, ww_ctx); + ww_mutex_lock_acquired(ww, ww_ctx); spin_unlock(&lock->wait_lock); preempt_enable(); @@ -879,7 +1030,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass, err: __set_current_state(TASK_RUNNING); mutex_remove_waiter(lock, &waiter, current); -err_early_backoff: +err_early_kill: spin_unlock(&lock->wait_lock); debug_mutex_free_waiter(&waiter); mutex_release(&lock->dep_map, 1, ip);
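
For illustration only, a minimal, compile-only sketch of how a caller would pick between the two algorithms once the above is applied, and what the usual acquire/backoff loop looks like around it. The class macros and the ww_acquire_*/ww_mutex_* calls are the existing API (DEFINE_WD_CLASS() being the one new macro from this patch); everything else here (demo_class, struct buf, demo_lock_pair()) is made up for the example.

#include <linux/errno.h>
#include <linux/ww_mutex.h>

/* Wound-Wait semantics; use DEFINE_WD_CLASS(demo_class) instead for Wait-Die. */
static DEFINE_WW_CLASS(demo_class);

struct buf {
	struct ww_mutex lock;	/* initialised with ww_mutex_init(&buf->lock, &demo_class) */
	/* ... payload ... */
};

/* Acquire x->lock and y->lock as one transaction (context). */
static int demo_lock_pair(struct buf *x, struct buf *y, struct ww_acquire_ctx *ctx)
{
	int ret;

	ww_acquire_init(ctx, &demo_class);

	ret = ww_mutex_lock(&x->lock, ctx);
	if (ret)	/* nothing acquired yet, so this cannot be -EDEADLK */
		goto out_fini;

	ret = ww_mutex_lock(&y->lock, ctx);
	if (ret == -EDEADLK) {
		/*
		 * We must back off: Wait-Die because an older context owns
		 * or already waits for y, Wound-Wait because an older
		 * context wounded us. Drop what we hold, sleep on the
		 * contended lock, then retake the rest under the same
		 * (old) stamp.
		 */
		ww_mutex_unlock(&x->lock);
		ww_mutex_lock_slow(&y->lock, ctx);

		ret = ww_mutex_lock(&x->lock, ctx);
		if (ret) {	/* can be -EDEADLK again; a real caller would loop */
			ww_mutex_unlock(&y->lock);
			goto out_fini;
		}
	} else if (ret) {	/* -EALREADY: we still hold only x */
		ww_mutex_unlock(&x->lock);
		goto out_fini;
	}

	ww_acquire_done(ctx);
	return 0;	/* caller unlocks both and calls ww_acquire_fini() when done */

out_fini:
	ww_acquire_fini(ctx);
	return ret;
}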
ZXggKmxvY2ssIGxvbmcgc3RhdGUsIHVuc2lnbmVkIGludCBzdWJjbGFzcywKIAkJd2FpdGVyLnd3 X2N0eCA9IE1VVEVYX1BPSVNPTl9XV19DVFg7CiAjZW5kaWYKIAl9IGVsc2UgewotCQkvKiBBZGQg aW4gc3RhbXAgb3JkZXIsIHdha2luZyB1cCB3YWl0ZXJzIHRoYXQgbXVzdCBiYWNrIG9mZi4gKi8K KwkJLyogQWRkIGluIHN0YW1wIG9yZGVyLCB3YWtpbmcgdXAgd2FpdGVycyB0aGF0IG11c3Qgd291 bmQgdGhlbXNlbHZlcy4gKi8KIAkJcmV0ID0gX193d19tdXRleF9hZGRfd2FpdGVyKCZ3YWl0ZXIs IGxvY2ssIHd3X2N0eCk7CiAJCWlmIChyZXQpCi0JCQlnb3RvIGVycl9lYXJseV9iYWNrb2ZmOwor CQkJZ290byBlcnJfZWFybHlfa2lsbDsKIAogCQl3YWl0ZXIud3dfY3R4ID0gd3dfY3R4OwogCX0K QEAgLTgyNCw4ICs5NjUsOCBAQCBfX211dGV4X2xvY2tfY29tbW9uKHN0cnVjdCBtdXRleCAqbG9j aywgbG9uZyBzdGF0ZSwgdW5zaWduZWQgaW50IHN1YmNsYXNzLAogCQkJZ290byBlcnI7CiAJCX0K IAotCQlpZiAodXNlX3d3X2N0eCAmJiB3d19jdHggJiYgd3dfY3R4LT5hY3F1aXJlZCA+IDApIHsK LQkJCXJldCA9IF9fd3dfbXV0ZXhfbG9ja19jaGVja19zdGFtcChsb2NrLCAmd2FpdGVyLCB3d19j dHgpOworCQlpZiAodXNlX3d3X2N0eCAmJiB3d19jdHgpIHsKKwkJCXJldCA9IF9fd3dfbXV0ZXhf Y2hlY2tfd291bmQobG9jaywgJndhaXRlciwgd3dfY3R4KTsKIAkJCWlmIChyZXQpCiAJCQkJZ290 byBlcnI7CiAJCX0KQEAgLTg1OSw2ICsxMDAwLDE2IEBAIF9fbXV0ZXhfbG9ja19jb21tb24oc3Ry dWN0IG11dGV4ICpsb2NrLCBsb25nIHN0YXRlLCB1bnNpZ25lZCBpbnQgc3ViY2xhc3MsCiBhY3F1 aXJlZDoKIAlfX3NldF9jdXJyZW50X3N0YXRlKFRBU0tfUlVOTklORyk7CiAKKwlpZiAodXNlX3d3 X2N0eCAmJiB3d19jdHgpIHsKKwkJLyoKKwkJICogV291bmQtV2FpdDsgd2Ugc3RvbGUgdGhlIGxv Y2sgKCFmaXJzdF93YWl0ZXIpLCBjaGVjayB0aGUKKwkJICogd2FpdGVycy4gVGhpcywgdG9nZXRo ZXIgd2l0aCBYWFgsIGVuc3VyZXMgX193d19tdXRleF93b3VuZCgpCisJCSAqIG9ubHkgbmVlZHMg dG8gY2hlY2sgdGhlIGZpcnN0IHdhaXRlciAod2l0aCBjb250ZXh0KS4KKwkJICovCisJCWlmICgh d3dfY3R4LT5pc193YWl0X2RpZSAmJiAhX19tdXRleF93YWl0ZXJfaXNfZmlyc3QobG9jaywgJndh aXRlcikpCisJCQlfX3d3X211dGV4X2NoZWNrX3dhaXRlcnMobG9jaywgd3dfY3R4KTsKKwl9CisK IAltdXRleF9yZW1vdmVfd2FpdGVyKGxvY2ssICZ3YWl0ZXIsIGN1cnJlbnQpOwogCWlmIChsaWtl bHkobGlzdF9lbXB0eSgmbG9jay0+d2FpdF9saXN0KSkpCiAJCV9fbXV0ZXhfY2xlYXJfZmxhZyhs b2NrLCBNVVRFWF9GTEFHUyk7CkBAIC04NzAsNyArMTAyMSw3IEBAIF9fbXV0ZXhfbG9ja19jb21t b24oc3RydWN0IG11dGV4ICpsb2NrLCBsb25nIHN0YXRlLCB1bnNpZ25lZCBpbnQgc3ViY2xhc3Ms CiAJbG9ja19hY3F1aXJlZCgmbG9jay0+ZGVwX21hcCwgaXApOwogCiAJaWYgKHVzZV93d19jdHgg JiYgd3dfY3R4KQotCQl3d19tdXRleF9zZXRfY29udGV4dF9zbG93cGF0aCh3dywgd3dfY3R4KTsK KwkJd3dfbXV0ZXhfbG9ja19hY3F1aXJlZCh3dywgd3dfY3R4KTsKIAogCXNwaW5fdW5sb2NrKCZs b2NrLT53YWl0X2xvY2spOwogCXByZWVtcHRfZW5hYmxlKCk7CkBAIC04NzksNyArMTAzMCw3IEBA IF9fbXV0ZXhfbG9ja19jb21tb24oc3RydWN0IG11dGV4ICpsb2NrLCBsb25nIHN0YXRlLCB1bnNp Z25lZCBpbnQgc3ViY2xhc3MsCiBlcnI6CiAJX19zZXRfY3VycmVudF9zdGF0ZShUQVNLX1JVTk5J TkcpOwogCW11dGV4X3JlbW92ZV93YWl0ZXIobG9jaywgJndhaXRlciwgY3VycmVudCk7Ci1lcnJf ZWFybHlfYmFja29mZjoKK2Vycl9lYXJseV9raWxsOgogCXNwaW5fdW5sb2NrKCZsb2NrLT53YWl0 X2xvY2spOwogCWRlYnVnX211dGV4X2ZyZWVfd2FpdGVyKCZ3YWl0ZXIpOwogCW11dGV4X3JlbGVh c2UoJmxvY2stPmRlcF9tYXAsIDEsIGlwKTsKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX18KZHJpLWRldmVsIG1haWxpbmcgbGlzdApkcmktZGV2ZWxAbGlzdHMu ZnJlZWRlc2t0b3Aub3JnCmh0dHBzOi8vbGlzdHMuZnJlZWRlc2t0b3Aub3JnL21haWxtYW4vbGlz dGluZm8vZHJpLWRldmVsCg==
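
For reference, a stand-alone user-space sketch of the two stamp rules the helpers
above implement. This is illustration only, not kernel code: "struct ctx",
stamp_after(), wait_die_should_die() and wound_wait_wound_holder() are made-up
stand-ins for struct ww_acquire_ctx, __ww_ctx_stamp_after(), __ww_mutex_die()
and __ww_mutex_wound().

/*
 * Simplified model of the Wait-Die vs Wound-Wait decisions on contention.
 * Compile with: cc -Wall sketch.c && ./a.out
 */
#include <stdbool.h>
#include <stdio.h>

struct ctx {
	unsigned long stamp;	/* lower stamp == older transaction */
	int acquired;		/* ww mutexes already held under this context */
	bool is_wait_die;
	bool wounded;
};

/* true if @a is younger (started later) than @b */
static bool stamp_after(const struct ctx *a, const struct ctx *b)
{
	return a->stamp > b->stamp;
}

/*
 * Wait-Die: a younger waiter that already holds locks backs off (dies)
 * as soon as it contends with an older context.
 */
static bool wait_die_should_die(const struct ctx *waiter, const struct ctx *older)
{
	return waiter->acquired > 0 && stamp_after(waiter, older);
}

/*
 * Wound-Wait: an older contender that already holds locks marks a younger
 * lock holder as wounded; the holder only backs off the next time it has
 * to block on a contended ww mutex.
 */
static bool wound_wait_wound_holder(const struct ctx *contender, struct ctx *holder)
{
	if (contender->acquired > 0 && stamp_after(holder, contender)) {
		holder->wounded = true;
		return true;
	}
	return false;
}

int main(void)
{
	struct ctx older = { .stamp = 1, .acquired = 1 };
	struct ctx younger = { .stamp = 2, .acquired = 1 };

	printf("Wait-Die: younger dies when it meets older: %d\n",
	       wait_die_should_die(&younger, &older));
	printf("Wound-Wait: older wounds younger holder: %d\n",
	       wound_wait_wound_holder(&older, &younger));
	printf("younger.wounded == %d\n", younger.wounded);
	return 0;
}

Note how the wound by itself changes nothing immediately: in the patch the
wounded flag is only acted on in __ww_mutex_check_wound() (and reset in
__mutex_lock_common() once acquired == 0), so a wound turns into a kill only
when the wounded context next blocks, matching the wounded-vs-killed
distinction quoted at the top of this mail.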