From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Auld
Date: Tue, 6 Jul 2021 13:34:40 +0100
Subject: Re: [Intel-gfx] [PATCH 7/7] dma-resv: Give the docs a do-over
To: Daniel Vetter
Cc: DRI Development, Intel Graphics Development, Sumit Semwal, linaro-mm-sig@lists.linaro.org, Daniel Vetter, Christian König, linux-media@vger.kernel.org
In-Reply-To: <20210706101209.3034092-8-daniel.vetter@ffwll.ch>
References: <20210706101209.3034092-1-daniel.vetter@ffwll.ch> <20210706101209.3034092-8-daniel.vetter@ffwll.ch>
X-Mailing-List: linux-media@vger.kernel.org

On Tue, 6 Jul 2021 at 11:12, Daniel Vetter wrote:
>
> Specifically document the new/clarified rules around how the shared
> fences do not have any ordering requirements against the exclusive
> fence.
>
> But also document all the things a bit better, given how central
> struct dma_resv to dynamic buffer management the docs have been very
> inadequat.
>
> - Lots more links to other pieces of the puzzle. Unfortunately
>   ttm_buffer_object has no docs, so no links :-(
>
> - Explain/complain a bit about dma_resv_locking_ctx(). I still don't
>   like that one, but fixing the ttm call chains is going to be
>   horrible. Plus we want to plug in real slowpath locking when we do
>   that anyway.
>
> - Main part of the patch is some actual docs for struct dma_resv.
>
> Overall I think we still have a lot of bad naming in this area (e.g.
> dma_resv.fence is singular, but contains the multiple shared fences),
> but I think that's more indicative of how the semantics and rules are
> just not great.
>
> Another thing that's real awkard is how chaining exclusive fences
> right now means direct dma_resv.exclusive_fence pointer access with an
> rcu_assign_pointer. Not so great either.
>
> Signed-off-by: Daniel Vetter
> Cc: Sumit Semwal
> Cc: "Christian König"
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> ---
>  drivers/dma-buf/dma-resv.c |  22 ++++++--
>  include/linux/dma-resv.h   | 104 +++++++++++++++++++++++++++++++++++--
>  2 files changed, 116 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index f26c71747d43..898f8d894bbd 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -48,6 +48,8 @@
>   * write operations) or N shared fences (read operations). The RCU
>   * mechanism is used to protect read access to fences from locked
>   * write-side updates.
> + *
> + * See struct dma_resv for more details.
>   */
>
>  DEFINE_WD_CLASS(reservation_ww_class);
> @@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini);
>   * @num_fences: number of fences we want to add
>   *
>   * Should be called before dma_resv_add_shared_fence(). Must
> - * be called with obj->lock held.
> + * be called with @obj locked through dma_resv_lock().
> + *
> + * Note that the preallocated slots need to be re-reserved if @obj is unlocked
> + * at any time before callind dma_resv_add_shared_fence(). This is validate when

s/callind/calling
s/validate/validated

> + * CONFIG_DEBUG_MUTEXES is enabled.
>   *
>   * RETURNS
>   * Zero for success, or -errno
> @@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max);
>   * @obj: the reservation object
>   * @fence: the shared fence to add
>   *
> - * Add a fence to a shared slot, obj->lock must be held, and
> + * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and
>   * dma_resv_reserve_shared() has been called.
> + *
> + * See also &dma_resv.fence for a discussion of the semantics.
>   */
>  void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
>  {
> @@ -280,7 +288,9 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence);
>   * @obj: the reservation object
>   * @fence: the shared fence to add

"the exclusive fence", or perhaps "the fence to add to the exclusive slot"?

>   *
> - * Add a fence to the exclusive slot. The obj->lock must be held.
> + * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
> + * Note that this function replaces all fences attached to @obj, see also
> + * &dma_resv.fence_excl for a discussion of the semantics.
>   */
>  void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
>  {
> @@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>   * fence
>   *
>   * Callers are not required to hold specific locks, but maybe hold
> - * dma_resv_lock() already
> + * dma_resv_lock() already.
> + *
>   * RETURNS
> - * true if all fences signaled, else false
> + *
> + * True if all fences signaled, else false.
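Not a blocker, but since the reserve/add pairing trips people up, maybe the kerneldoc could also show the expected calling pattern. Untested sketch, error handling and the surrounding driver code elided:

```c
/* Untested sketch of the reserve + add flow described above. */
dma_resv_lock(obj, NULL);

/* Preallocate a slot; must be redone if @obj is unlocked in between. */
ret = dma_resv_reserve_shared(obj, 1);
if (ret == 0)
	dma_resv_add_shared_fence(obj, fence);

dma_resv_unlock(obj);
```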
>   */
>  bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
>  {
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index e1ca2080a1ff..c77fd54d033f 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -62,16 +62,90 @@ struct dma_resv_list {
>
>  /**
>   * struct dma_resv - a reservation object manages fences for a buffer
> - * @lock: update side lock
> - * @seq: sequence count for managing RCU read-side synchronization
> - * @fence_excl: the exclusive fence, if there is one currently
> - * @fence: list of current shared fences
> + *
> + * There are multiple uses for this, with sometimes slightly different rules in
> + * how the fence slots are used.
> + *
> + * One use is to synchronize cross-driver access to a struct dma_buf, either for
> + * dynamic buffer management or just to handle implicit synchronization between
> + * different users of the buffer in userspace. See &dma_buf.resv for a more
> + * in-depth discussion.
> + *
> + * The other major use is to manage access and locking within a driver in a
> + * buffer based memory manager. struct ttm_buffer_object is the canonical
> + * example here, since this is were reservation objects originated from. But use

s/were/where

> + * in drivers is spreading and some drivers also manage struct
> + * drm_gem_object with the same scheme.
>   */
>  struct dma_resv {
> +	/**
> +	 * @lock:
> +	 *
> +	 * Update side lock. Don't use directly, instead use the wrapper
> +	 * functions like dma_resv_lock() and dma_resv_unlock().
> +	 *
> +	 * Drivers which use the reservation object to manage memory dynamically
> +	 * also use this lock to protect buffer object state like placement,
> +	 * allocation policies or throughout command submission.
> +	 */
>  	struct ww_mutex lock;
> +
> +	/**
> +	 * @seq:
> +	 *
> +	 * Sequence count for managing RCU read-side synchronization, allows
> +	 * read-only access to @fence_excl and @fence while ensuring we take a
> +	 * consistent snapshot.
> +	 */
>  	seqcount_ww_mutex_t seq;
>
> +	/**
> +	 * @fence_excl:
> +	 *
> +	 * The exclusive fence, if there is one currently.
> +	 *
> +	 * There are two was to update this fence:

s/was/ways

> +	 *
> +	 * - First by calling dma_resv_add_excl_fence(), which replaces all
> +	 *   fences attached to the reservation object. To guarantee that no
> +	 *   fences are lost this new fence must signal only after all previous
> +	 *   fences, both shared and exclusive, have signalled. In some cases it/

Random slash at the end

> +	 *   is convenient to achieve that by attaching a struct dma_fence_array
> +	 *   with all the new and old fences.
> +	 *
> +	 * - Alternatively the fence can be set directly, which leaves the
> +	 *   shared fences unchanged. To guarantee that no fences are lost this
> +	 *   new fence must signale only after the previous exclusive fence has

s/signale/signal

> +	 *   singalled. Since the shared fences are staying intact, it is not

s/singalled/signalled

> +	 *   necessary to maintain any ordering against those. If semantically
> +	 *   only a new access is added without actually treating the previous
> +	 *   one as a dependency the exclusive fences can be strung together
> +	 *   using struct dma_fence_chain.
> +	 *
> +	 * Note that actual semantics of what an exclusive or shared fence mean
> +	 * is defined by the user, for reservation objects shared across drivers
> +	 * see &dma_buf.resv.
> +	 */
>  	struct dma_fence __rcu *fence_excl;
> +
> +	/**
> +	 * @fence:
> +	 *
> +	 * List of current shared fences.
> +	 *
> +	 * There are no ordering constraints of shared fences against the
> +	 * exclusive fence slot. If a waiter needs to wait for all access, it
> +	 * has to wait for both set of fences to signal.
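Maybe also worth spelling out here what "wait for both sets" looks like for users. Something like the below, assuming the renamed wait helper from earlier in this series; untested:

```c
/* Untested sketch: wait for all access, i.e. shared and exclusive fences. */
long ret;

ret = dma_resv_wait_timeout(obj, true /* wait_all */, false /* intr */,
			    MAX_SCHEDULE_TIMEOUT);
if (ret < 0)
	return ret;
```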
> +	 *
> +	 * A new fence is added by calling dma_resv_add_shared_fence(). Since
> +	 * this often needs to be done past the point of no return in command
> +	 * submission it cannot fail, and therefor sufficient slots need to be

s/therefor/therefore

> +	 * reserved by calling dma_resv_reserve_shared().
> +	 *
> +	 * Note that actual semantics of what an exclusive or shared fence mean
> +	 * is defined by the user, for reservation objects shared across drivers
> +	 * see &dma_buf.resv.
> +	 */
>  	struct dma_resv_list __rcu *fence;
>  };
>
> @@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {}
>   * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
>   * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
>   * object may be locked by itself by passing NULL as @ctx.
> + *
> + * When a die situation is indicated by returning -EDEADLK all locks held by
> + * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj.
> + *
> + * Unlocked by calling dma_resv_lock().

dma_resv_unlock()

> + *
> + * See also dma_resv_lock_interruptible() for the interruptible variant.
>   */
>  static inline int dma_resv_lock(struct dma_resv *obj,
>  				struct ww_acquire_ctx *ctx)
> @@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj,
>   * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
>   * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
>   * object may be locked by itself by passing NULL as @ctx.
> + *
> + * When a die situation is indicated by returning -EDEADLK all locks held by
> + * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on
> + * @obj.
> + *
> + * Unlocked by calling dma_resv_lock().
dma_resv_unlock()

fwiw,
Reviewed-by: Matthew Auld

>   */
>  static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
>  					      struct ww_acquire_ctx *ctx)
> @@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
>   * Acquires the reservation object after a die case. This function
>   * will sleep until the lock becomes available. See dma_resv_lock() as
>   * well.
> + *
> + * See also dma_resv_lock_slow_interruptible() for the interruptible variant.
>   */
>  static inline void dma_resv_lock_slow(struct dma_resv *obj,
>  				      struct ww_acquire_ctx *ctx)
> @@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj,
>   * if they overlap with a writer.
>   *
>   * Also note that since no context is provided, no deadlock protection is
> - * possible.
> + * possible, which is also not needed for a trylock.
>   *
>   * Returns true if the lock was acquired, false otherwise.
>   */
> @@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj)
>   *
>   * Returns the context used to lock a reservation object or NULL if no context
>   * was used or the object is not locked at all.
> + *
> + * WARNING: This interface is pretty horrible, but TTM needs it because it
> + * doesn't pass the struct ww_acquire_ctx around in some very long callchains.
> + * Everyone else just uses it to check whether they're holding a reservation or
> + * not.
>   */
>  static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)
>  {
> --
> 2.32.0
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
KCkuIFRoaXMgaXMgdmFsaWRhdGUgd2hlbgoKcy9jYWxsaW5kL2NhbGxpbmcKcy92YWxpZGF0ZS92 YWxpZGF0ZWQKCj4gKyAqIENPTkZJR19ERUJVR19NVVRFWEVTIGlzIGVuYWJsZWQuCj4gICAqCj4g ICAqIFJFVFVSTlMKPiAgICogWmVybyBmb3Igc3VjY2Vzcywgb3IgLWVycm5vCj4gQEAgLTIzNCw4 ICsyNDAsMTAgQEAgRVhQT1JUX1NZTUJPTChkbWFfcmVzdl9yZXNldF9zaGFyZWRfbWF4KTsKPiAg ICogQG9iajogdGhlIHJlc2VydmF0aW9uIG9iamVjdAo+ICAgKiBAZmVuY2U6IHRoZSBzaGFyZWQg ZmVuY2UgdG8gYWRkCj4gICAqCj4gLSAqIEFkZCBhIGZlbmNlIHRvIGEgc2hhcmVkIHNsb3QsIG9i ai0+bG9jayBtdXN0IGJlIGhlbGQsIGFuZAo+ICsgKiBBZGQgYSBmZW5jZSB0byBhIHNoYXJlZCBz bG90LCBAb2JqIG11c3QgYmUgbG9ja2VkIHdpdGggZG1hX3Jlc3ZfbG9jaygpLCBhbmQKPiAgICog ZG1hX3Jlc3ZfcmVzZXJ2ZV9zaGFyZWQoKSBoYXMgYmVlbiBjYWxsZWQuCj4gKyAqCj4gKyAqIFNl ZSBhbHNvICZkbWFfcmVzdi5mZW5jZSBmb3IgYSBkaXNjdXNzaW9uIG9mIHRoZSBzZW1hbnRpY3Mu Cj4gICAqLwo+ICB2b2lkIGRtYV9yZXN2X2FkZF9zaGFyZWRfZmVuY2Uoc3RydWN0IGRtYV9yZXN2 ICpvYmosIHN0cnVjdCBkbWFfZmVuY2UgKmZlbmNlKQo+ICB7Cj4gQEAgLTI4MCw3ICsyODgsOSBA QCBFWFBPUlRfU1lNQk9MKGRtYV9yZXN2X2FkZF9zaGFyZWRfZmVuY2UpOwo+ICAgKiBAb2JqOiB0 aGUgcmVzZXJ2YXRpb24gb2JqZWN0Cj4gICAqIEBmZW5jZTogdGhlIHNoYXJlZCBmZW5jZSB0byBh ZGQKCiJ0aGUgZXhjbHVzaXZlIGZlbmNlIiwgb3IgcGVyaGFwcyAidGhlIGZlbmNlIHRvIGFkZCB0 byB0aGUgZXhjbHVzaXZlIHNsb3QiPwoKPiAgICoKPiAtICogQWRkIGEgZmVuY2UgdG8gdGhlIGV4 Y2x1c2l2ZSBzbG90LiAgVGhlIG9iai0+bG9jayBtdXN0IGJlIGhlbGQuCj4gKyAqIEFkZCBhIGZl bmNlIHRvIHRoZSBleGNsdXNpdmUgc2xvdC4gQG9iaiBtdXN0IGJlIGxvY2tlZCB3aXRoIGRtYV9y ZXN2X2xvY2soKS4KPiArICogTm90ZSB0aGF0IHRoaXMgZnVuY3Rpb24gcmVwbGFjZXMgYWxsIGZl bmNlcyBhdHRhY2hlZCB0byBAb2JqLCBzZWUgYWxzbwo+ICsgKiAmZG1hX3Jlc3YuZmVuY2VfZXhj bCBmb3IgYSBkaXNjdXNzaW9uIG9mIHRoZSBzZW1hbnRpY3MuCj4gICAqLwo+ICB2b2lkIGRtYV9y ZXN2X2FkZF9leGNsX2ZlbmNlKHN0cnVjdCBkbWFfcmVzdiAqb2JqLCBzdHJ1Y3QgZG1hX2ZlbmNl ICpmZW5jZSkKPiAgewo+IEBAIC02MDksOSArNjE5LDExIEBAIHN0YXRpYyBpbmxpbmUgaW50IGRt YV9yZXN2X3Rlc3Rfc2lnbmFsZWRfc2luZ2xlKHN0cnVjdCBkbWFfZmVuY2UgKnBhc3NlZF9mZW5j ZSkKPiAgICogZmVuY2UKPiAgICoKPiAgICogQ2FsbGVycyBhcmUgbm90IHJlcXVpcmVkIHRvIGhv 
bGQgc3BlY2lmaWMgbG9ja3MsIGJ1dCBtYXliZSBob2xkCj4gLSAqIGRtYV9yZXN2X2xvY2soKSBh bHJlYWR5Cj4gKyAqIGRtYV9yZXN2X2xvY2soKSBhbHJlYWR5Lgo+ICsgKgo+ICAgKiBSRVRVUk5T Cj4gLSAqIHRydWUgaWYgYWxsIGZlbmNlcyBzaWduYWxlZCwgZWxzZSBmYWxzZQo+ICsgKgo+ICsg KiBUcnVlIGlmIGFsbCBmZW5jZXMgc2lnbmFsZWQsIGVsc2UgZmFsc2UuCj4gICAqLwo+ICBib29s IGRtYV9yZXN2X3Rlc3Rfc2lnbmFsZWQoc3RydWN0IGRtYV9yZXN2ICpvYmosIGJvb2wgdGVzdF9h bGwpCj4gIHsKPiBkaWZmIC0tZ2l0IGEvaW5jbHVkZS9saW51eC9kbWEtcmVzdi5oIGIvaW5jbHVk ZS9saW51eC9kbWEtcmVzdi5oCj4gaW5kZXggZTFjYTIwODBhMWZmLi5jNzdmZDU0ZDAzM2YgMTAw NjQ0Cj4gLS0tIGEvaW5jbHVkZS9saW51eC9kbWEtcmVzdi5oCj4gKysrIGIvaW5jbHVkZS9saW51 eC9kbWEtcmVzdi5oCj4gQEAgLTYyLDE2ICs2Miw5MCBAQCBzdHJ1Y3QgZG1hX3Jlc3ZfbGlzdCB7 Cj4KPiAgLyoqCj4gICAqIHN0cnVjdCBkbWFfcmVzdiAtIGEgcmVzZXJ2YXRpb24gb2JqZWN0IG1h bmFnZXMgZmVuY2VzIGZvciBhIGJ1ZmZlcgo+IC0gKiBAbG9jazogdXBkYXRlIHNpZGUgbG9jawo+ IC0gKiBAc2VxOiBzZXF1ZW5jZSBjb3VudCBmb3IgbWFuYWdpbmcgUkNVIHJlYWQtc2lkZSBzeW5j aHJvbml6YXRpb24KPiAtICogQGZlbmNlX2V4Y2w6IHRoZSBleGNsdXNpdmUgZmVuY2UsIGlmIHRo ZXJlIGlzIG9uZSBjdXJyZW50bHkKPiAtICogQGZlbmNlOiBsaXN0IG9mIGN1cnJlbnQgc2hhcmVk IGZlbmNlcwo+ICsgKgo+ICsgKiBUaGVyZSBhcmUgbXVsdGlwbGUgdXNlcyBmb3IgdGhpcywgd2l0 aCBzb21ldGltZXMgc2xpZ2h0bHkgZGlmZmVyZW50IHJ1bGVzIGluCj4gKyAqIGhvdyB0aGUgZmVu Y2Ugc2xvdHMgYXJlIHVzZWQuCj4gKyAqCj4gKyAqIE9uZSB1c2UgaXMgdG8gc3luY2hyb25pemUg Y3Jvc3MtZHJpdmVyIGFjY2VzcyB0byBhIHN0cnVjdCBkbWFfYnVmLCBlaXRoZXIgZm9yCj4gKyAq IGR5bmFtaWMgYnVmZmVyIG1hbmFnZW1lbnQgb3IganVzdCB0byBoYW5kbGUgaW1wbGljaXQgc3lu Y2hyb25pemF0aW9uIGJldHdlZW4KPiArICogZGlmZmVyZW50IHVzZXJzIG9mIHRoZSBidWZmZXIg aW4gdXNlcnNwYWNlLiBTZWUgJmRtYV9idWYucmVzdiBmb3IgYSBtb3JlCj4gKyAqIGluLWRlcHRo IGRpc2N1c3Npb24uCj4gKyAqCj4gKyAqIFRoZSBvdGhlciBtYWpvciB1c2UgaXMgdG8gbWFuYWdl IGFjY2VzcyBhbmQgbG9ja2luZyB3aXRoaW4gYSBkcml2ZXIgaW4gYQo+ICsgKiBidWZmZXIgYmFz ZWQgbWVtb3J5IG1hbmFnZXIuIHN0cnVjdCB0dG1fYnVmZmVyX29iamVjdCBpcyB0aGUgY2Fub25p Y2FsCj4gKyAqIGV4YW1wbGUgaGVyZSwgc2luY2UgdGhpcyBpcyB3ZXJlIHJlc2VydmF0aW9uIG9i 
amVjdHMgb3JpZ2luYXRlZCBmcm9tLiBCdXQgdXNlCgpzL3dlcmUvd2hlcmUKCj4gKyAqIGluIGRy aXZlcnMgaXMgc3ByZWFkaW5nIGFuZCBzb21lIGRyaXZlcnMgYWxzbyBtYW5hZ2Ugc3RydWN0Cj4g KyAqIGRybV9nZW1fb2JqZWN0IHdpdGggdGhlIHNhbWUgc2NoZW1lLgo+ICAgKi8KPiAgc3RydWN0 IGRtYV9yZXN2IHsKPiArICAgICAgIC8qKgo+ICsgICAgICAgICogQGxvY2s6Cj4gKyAgICAgICAg Kgo+ICsgICAgICAgICogVXBkYXRlIHNpZGUgbG9jay4gRG9uJ3QgdXNlIGRpcmVjdGx5LCBpbnN0 ZWFkIHVzZSB0aGUgd3JhcHBlcgo+ICsgICAgICAgICogZnVuY3Rpb25zIGxpa2UgZG1hX3Jlc3Zf bG9jaygpIGFuZCBkbWFfcmVzdl91bmxvY2soKS4KPiArICAgICAgICAqCj4gKyAgICAgICAgKiBE cml2ZXJzIHdoaWNoIHVzZSB0aGUgcmVzZXJ2YXRpb24gb2JqZWN0IHRvIG1hbmFnZSBtZW1vcnkg ZHluYW1pY2FsbHkKPiArICAgICAgICAqIGFsc28gdXNlIHRoaXMgbG9jayB0byBwcm90ZWN0IGJ1 ZmZlciBvYmplY3Qgc3RhdGUgbGlrZSBwbGFjZW1lbnQsCj4gKyAgICAgICAgKiBhbGxvY2F0aW9u IHBvbGljaWVzIG9yIHRocm91Z2hvdXQgY29tbWFuZCBzdWJtaXNzaW9uLgo+ICsgICAgICAgICov Cj4gICAgICAgICBzdHJ1Y3Qgd3dfbXV0ZXggbG9jazsKPiArCj4gKyAgICAgICAvKioKPiArICAg ICAgICAqIEBzZXE6Cj4gKyAgICAgICAgKgo+ICsgICAgICAgICogU2VxdWVuY2UgY291bnQgZm9y IG1hbmFnaW5nIFJDVSByZWFkLXNpZGUgc3luY2hyb25pemF0aW9uLCBhbGxvd3MKPiArICAgICAg ICAqIHJlYWQtb25seSBhY2Nlc3MgdG8gQGZlbmNlX2V4Y2wgYW5kIEBmZW5jZSB3aGlsZSBlbnN1 cmluZyB3ZSB0YWtlIGEKPiArICAgICAgICAqIGNvbnNpc3RlbnQgc25hcHNob3QuCj4gKyAgICAg ICAgKi8KPiAgICAgICAgIHNlcWNvdW50X3d3X211dGV4X3Qgc2VxOwo+Cj4gKyAgICAgICAvKioK PiArICAgICAgICAqIEBmZW5jZV9leGNsOgo+ICsgICAgICAgICoKPiArICAgICAgICAqIFRoZSBl eGNsdXNpdmUgZmVuY2UsIGlmIHRoZXJlIGlzIG9uZSBjdXJyZW50bHkuCj4gKyAgICAgICAgKgo+ ICsgICAgICAgICogVGhlcmUgYXJlIHR3byB3YXMgdG8gdXBkYXRlIHRoaXMgZmVuY2U6CgpzL3dh cy93YXlzCgo+ICsgICAgICAgICoKPiArICAgICAgICAqIC0gRmlyc3QgYnkgY2FsbGluZyBkbWFf cmVzdl9hZGRfZXhjbF9mZW5jZSgpLCB3aGljaCByZXBsYWNlcyBhbGwKPiArICAgICAgICAqICAg ZmVuY2VzIGF0dGFjaGVkIHRvIHRoZSByZXNlcnZhdGlvbiBvYmplY3QuIFRvIGd1YXJhbnRlZSB0 aGF0IG5vCj4gKyAgICAgICAgKiAgIGZlbmNlcyBhcmUgbG9zdCB0aGlzIG5ldyBmZW5jZSBtdXN0 IHNpZ25hbCBvbmx5IGFmdGVyIGFsbCBwcmV2aW91cwo+ICsgICAgICAgICogICBmZW5jZXMsIGJv 
dGggc2hhcmVkIGFuZCBleGNsdXNpdmUsIGhhdmUgc2lnbmFsbGVkLiBJbiBzb21lIGNhc2VzIGl0 LwoKUmFuZG9tIHNsYXNoIGF0IHRoZSBlbmQKCj4gKyAgICAgICAgKiAgIGlzIGNvbnZlbmllbnQg dG8gYWNoaWV2ZSB0aGF0IGJ5IGF0dGFjaGluZyBhIHN0cnVjdCBkbWFfZmVuY2VfYXJyYXkKPiAr ICAgICAgICAqICAgd2l0aCBhbGwgdGhlIG5ldyBhbmQgb2xkIGZlbmNlcy4KPiArICAgICAgICAq Cj4gKyAgICAgICAgKiAtIEFsdGVybmF0aXZlbHkgdGhlIGZlbmNlIGNhbiBiZSBzZXQgZGlyZWN0 bHksIHdoaWNoIGxlYXZlcyB0aGUKPiArICAgICAgICAqICAgc2hhcmVkIGZlbmNlcyB1bmNoYW5n ZWQuIFRvIGd1YXJhbnRlZSB0aGF0IG5vIGZlbmNlcyBhcmUgbG9zdCB0aGlzCj4gKyAgICAgICAg KiAgIG5ldyBmZW5jZSBtdXN0IHNpZ25hbGUgb25seSBhZnRlciB0aGUgcHJldmlvdXMgZXhjbHVz aXZlIGZlbmNlIGhhcwoKcy9zaWduYWxlL3NpZ25hbAoKPiArICAgICAgICAqICAgc2luZ2FsbGVk LiBTaW5jZSB0aGUgc2hhcmVkIGZlbmNlcyBhcmUgc3RheWluZyBpbnRhY3QsIGl0IGlzIG5vdAoK cy9zaW5nYWxsZWQvc2lnbmFsbGVkCgo+ICsgICAgICAgICogICBuZWNlc3NhcnkgdG8gbWFpbnRh aW4gYW55IG9yZGVyaW5nIGFnYWluc3QgdGhvc2UuIElmIHNlbWFudGljYWxseQo+ICsgICAgICAg ICogICBvbmx5IGEgbmV3IGFjY2VzcyBpcyBhZGRlZCB3aXRob3V0IGFjdHVhbGx5IHRyZWF0aW5n IHRoZSBwcmV2aW91cwo+ICsgICAgICAgICogICBvbmUgYXMgYSBkZXBlbmRlbmN5IHRoZSBleGNs dXNpdmUgZmVuY2VzIGNhbiBiZSBzdHJ1bmcgdG9nZXRoZXIKPiArICAgICAgICAqICAgdXNpbmcg c3RydWN0IGRtYV9mZW5jZV9jaGFpbi4KPiArICAgICAgICAqCj4gKyAgICAgICAgKiBOb3RlIHRo YXQgYWN0dWFsIHNlbWFudGljcyBvZiB3aGF0IGFuIGV4Y2x1c2l2ZSBvciBzaGFyZWQgZmVuY2Ug bWVhbgo+ICsgICAgICAgICogaXMgZGVmaW5lZCBieSB0aGUgdXNlciwgZm9yIHJlc2VydmF0aW9u IG9iamVjdHMgc2hhcmVkIGFjcm9zcyBkcml2ZXJzCj4gKyAgICAgICAgKiBzZWUgJmRtYV9idWYu cmVzdi4KPiArICAgICAgICAqLwo+ICAgICAgICAgc3RydWN0IGRtYV9mZW5jZSBfX3JjdSAqZmVu Y2VfZXhjbDsKPiArCj4gKyAgICAgICAvKioKPiArICAgICAgICAqIEBmZW5jZToKPiArICAgICAg ICAqCj4gKyAgICAgICAgKiBMaXN0IG9mIGN1cnJlbnQgc2hhcmVkIGZlbmNlcy4KPiArICAgICAg ICAqCj4gKyAgICAgICAgKiBUaGVyZSBhcmUgbm8gb3JkZXJpbmcgY29uc3RyYWludHMgb2Ygc2hh cmVkIGZlbmNlcyBhZ2FpbnN0IHRoZQo+ICsgICAgICAgICogZXhjbHVzaXZlIGZlbmNlIHNsb3Qu IElmIGEgd2FpdGVyIG5lZWRzIHRvIHdhaXQgZm9yIGFsbCBhY2Nlc3MsIGl0Cj4gKyAgICAgICAg 
KiBoYXMgdG8gd2FpdCBmb3IgYm90aCBzZXQgb2YgZmVuY2VzIHRvIHNpZ25hbC4KPiArICAgICAg ICAqCj4gKyAgICAgICAgKiBBIG5ldyBmZW5jZSBpcyBhZGRlZCBieSBjYWxsaW5nIGRtYV9yZXN2 X2FkZF9zaGFyZWRfZmVuY2UoKS4gU2luY2UKPiArICAgICAgICAqIHRoaXMgb2Z0ZW4gbmVlZHMg dG8gYmUgZG9uZSBwYXN0IHRoZSBwb2ludCBvZiBubyByZXR1cm4gaW4gY29tbWFuZAo+ICsgICAg ICAgICogc3VibWlzc2lvbiBpdCBjYW5ub3QgZmFpbCwgYW5kIHRoZXJlZm9yIHN1ZmZpY2llbnQg c2xvdHMgbmVlZCB0byBiZQoKcy90aGVyZWZvci90aGVyZWZvcmUKCj4gKyAgICAgICAgKiByZXNl cnZlZCBieSBjYWxsaW5nIGRtYV9yZXN2X3Jlc2VydmVfc2hhcmVkKCkuCj4gKyAgICAgICAgKgo+ ICsgICAgICAgICogTm90ZSB0aGF0IGFjdHVhbCBzZW1hbnRpY3Mgb2Ygd2hhdCBhbiBleGNsdXNp dmUgb3Igc2hhcmVkIGZlbmNlIG1lYW4KPiArICAgICAgICAqIGlzIGRlZmluZWQgYnkgdGhlIHVz ZXIsIGZvciByZXNlcnZhdGlvbiBvYmplY3RzIHNoYXJlZCBhY3Jvc3MgZHJpdmVycwo+ICsgICAg ICAgICogc2VlICZkbWFfYnVmLnJlc3YuCj4gKyAgICAgICAgKi8KPiAgICAgICAgIHN0cnVjdCBk bWFfcmVzdl9saXN0IF9fcmN1ICpmZW5jZTsKPiAgfTsKPgo+IEBAIC05OCw2ICsxNzIsMTMgQEAg c3RhdGljIGlubGluZSB2b2lkIGRtYV9yZXN2X3Jlc2V0X3NoYXJlZF9tYXgoc3RydWN0IGRtYV9y ZXN2ICpvYmopIHt9Cj4gICAqIHVuZGVmaW5lZCBvcmRlciwgYSAjd3dfYWNxdWlyZV9jdHggaXMg cGFzc2VkIHRvIHVud2luZCBpZiBhIGN5Y2xlCj4gICAqIGlzIGRldGVjdGVkLiBTZWUgd3dfbXV0 ZXhfbG9jaygpIGFuZCB3d19hY3F1aXJlX2luaXQoKS4gQSByZXNlcnZhdGlvbgo+ICAgKiBvYmpl Y3QgbWF5IGJlIGxvY2tlZCBieSBpdHNlbGYgYnkgcGFzc2luZyBOVUxMIGFzIEBjdHguCj4gKyAq Cj4gKyAqIFdoZW4gYSBkaWUgc2l0dWF0aW9uIGlzIGluZGljYXRlZCBieSByZXR1cm5pbmcgLUVE RUFETEsgYWxsIGxvY2tzIGhlbGQgYnkKPiArICogQGN0eCBtdXN0IGJlIHVubG9ja2VkIGFuZCB0 aGVuIGRtYV9yZXN2X2xvY2tfc2xvdygpIGNhbGxlZCBvbiBAb2JqLgo+ICsgKgo+ICsgKiBVbmxv Y2tlZCBieSBjYWxsaW5nIGRtYV9yZXN2X2xvY2soKS4KCmRtYV9yZXN2X3VubG9jaygpCgo+ICsg Kgo+ICsgKiBTZWUgYWxzbyBkbWFfcmVzdl9sb2NrX2ludGVycnVwdGlibGUoKSBmb3IgdGhlIGlu dGVycnVwdGlibGUgdmFyaWFudC4KPiAgICovCj4gIHN0YXRpYyBpbmxpbmUgaW50IGRtYV9yZXN2 X2xvY2soc3RydWN0IGRtYV9yZXN2ICpvYmosCj4gICAgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICBzdHJ1Y3Qgd3dfYWNxdWlyZV9jdHggKmN0eCkKPiBAQCAtMTE5LDYgKzIwMCwxMiBAQCBz 
dGF0aWMgaW5saW5lIGludCBkbWFfcmVzdl9sb2NrKHN0cnVjdCBkbWFfcmVzdiAqb2JqLAo+ICAg KiB1bmRlZmluZWQgb3JkZXIsIGEgI3d3X2FjcXVpcmVfY3R4IGlzIHBhc3NlZCB0byB1bndpbmQg aWYgYSBjeWNsZQo+ICAgKiBpcyBkZXRlY3RlZC4gU2VlIHd3X211dGV4X2xvY2soKSBhbmQgd3df YWNxdWlyZV9pbml0KCkuIEEgcmVzZXJ2YXRpb24KPiAgICogb2JqZWN0IG1heSBiZSBsb2NrZWQg YnkgaXRzZWxmIGJ5IHBhc3NpbmcgTlVMTCBhcyBAY3R4Lgo+ICsgKgo+ICsgKiBXaGVuIGEgZGll IHNpdHVhdGlvbiBpcyBpbmRpY2F0ZWQgYnkgcmV0dXJuaW5nIC1FREVBRExLIGFsbCBsb2NrcyBo ZWxkIGJ5Cj4gKyAqIEBjdHggbXVzdCBiZSB1bmxvY2tlZCBhbmQgdGhlbiBkbWFfcmVzdl9sb2Nr X3Nsb3dfaW50ZXJydXB0aWJsZSgpIGNhbGxlZCBvbgo+ICsgKiBAb2JqLgo+ICsgKgo+ICsgKiBV bmxvY2tlZCBieSBjYWxsaW5nIGRtYV9yZXN2X2xvY2soKS4KCmRtYV9yZXN2X3VubG9jaygpCgpm d2l3LApSZXZpZXdlZC1ieTogTWF0dGhldyBBdWxkIDxtYXR0aGV3LmF1bGRAaW50ZWwuY29tPgoK Cj4gICAqLwo+ICBzdGF0aWMgaW5saW5lIGludCBkbWFfcmVzdl9sb2NrX2ludGVycnVwdGlibGUo c3RydWN0IGRtYV9yZXN2ICpvYmosCj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg ICAgICAgICAgICAgIHN0cnVjdCB3d19hY3F1aXJlX2N0eCAqY3R4KQo+IEBAIC0xMzQsNiArMjIx LDggQEAgc3RhdGljIGlubGluZSBpbnQgZG1hX3Jlc3ZfbG9ja19pbnRlcnJ1cHRpYmxlKHN0cnVj dCBkbWFfcmVzdiAqb2JqLAo+ICAgKiBBY3F1aXJlcyB0aGUgcmVzZXJ2YXRpb24gb2JqZWN0IGFm dGVyIGEgZGllIGNhc2UuIFRoaXMgZnVuY3Rpb24KPiAgICogd2lsbCBzbGVlcCB1bnRpbCB0aGUg bG9jayBiZWNvbWVzIGF2YWlsYWJsZS4gU2VlIGRtYV9yZXN2X2xvY2soKSBhcwo+ICAgKiB3ZWxs Lgo+ICsgKgo+ICsgKiBTZWUgYWxzbyBkbWFfcmVzdl9sb2NrX3Nsb3dfaW50ZXJydXB0aWJsZSgp IGZvciB0aGUgaW50ZXJydXB0aWJsZSB2YXJpYW50Lgo+ICAgKi8KPiAgc3RhdGljIGlubGluZSB2 b2lkIGRtYV9yZXN2X2xvY2tfc2xvdyhzdHJ1Y3QgZG1hX3Jlc3YgKm9iaiwKPiAgICAgICAgICAg ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCB3d19hY3F1aXJlX2N0eCAqY3R4KQo+ IEBAIC0xNjcsNyArMjU2LDcgQEAgc3RhdGljIGlubGluZSBpbnQgZG1hX3Jlc3ZfbG9ja19zbG93 X2ludGVycnVwdGlibGUoc3RydWN0IGRtYV9yZXN2ICpvYmosCj4gICAqIGlmIHRoZXkgb3Zlcmxh cCB3aXRoIGEgd3JpdGVyLgo+ICAgKgo+ICAgKiBBbHNvIG5vdGUgdGhhdCBzaW5jZSBubyBjb250 ZXh0IGlzIHByb3ZpZGVkLCBubyBkZWFkbG9jayBwcm90ZWN0aW9uIGlzCj4gLSAqIHBvc3NpYmxl 
Lgo+ICsgKiBwb3NzaWJsZSwgd2hpY2ggaXMgYWxzbyBub3QgbmVlZGVkIGZvciBhIHRyeWxvY2su Cj4gICAqCj4gICAqIFJldHVybnMgdHJ1ZSBpZiB0aGUgbG9jayB3YXMgYWNxdWlyZWQsIGZhbHNl IG90aGVyd2lzZS4KPiAgICovCj4gQEAgLTE5Myw2ICsyODIsMTEgQEAgc3RhdGljIGlubGluZSBi b29sIGRtYV9yZXN2X2lzX2xvY2tlZChzdHJ1Y3QgZG1hX3Jlc3YgKm9iaikKPiAgICoKPiAgICog UmV0dXJucyB0aGUgY29udGV4dCB1c2VkIHRvIGxvY2sgYSByZXNlcnZhdGlvbiBvYmplY3Qgb3Ig TlVMTCBpZiBubyBjb250ZXh0Cj4gICAqIHdhcyB1c2VkIG9yIHRoZSBvYmplY3QgaXMgbm90IGxv Y2tlZCBhdCBhbGwuCj4gKyAqCj4gKyAqIFdBUk5JTkc6IFRoaXMgaW50ZXJmYWNlIGlzIHByZXR0 eSBob3JyaWJsZSwgYnV0IFRUTSBuZWVkcyBpdCBiZWNhdXNlIGl0Cj4gKyAqIGRvZXNuJ3QgcGFz cyB0aGUgc3RydWN0IHd3X2FjcXVpcmVfY3R4IGFyb3VuZCBpbiBzb21lIHZlcnkgbG9uZyBjYWxs Y2hhaW5zLgo+ICsgKiBFdmVyeW9uZSBlbHNlIGp1c3QgdXNlcyBpdCB0byBjaGVjayB3aGV0aGVy IHRoZXkncmUgaG9sZGluZyBhIHJlc2VydmF0aW9uIG9yCj4gKyAqIG5vdC4KPiAgICovCj4gIHN0 YXRpYyBpbmxpbmUgc3RydWN0IHd3X2FjcXVpcmVfY3R4ICpkbWFfcmVzdl9sb2NraW5nX2N0eChz dHJ1Y3QgZG1hX3Jlc3YgKm9iaikKPiAgewo+IC0tCj4gMi4zMi4wCj4KPiBfX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwo+IEludGVsLWdmeCBtYWlsaW5nIGxp c3QKPiBJbnRlbC1nZnhAbGlzdHMuZnJlZWRlc2t0b3Aub3JnCj4gaHR0cHM6Ly9saXN0cy5mcmVl ZGVza3RvcC5vcmcvbWFpbG1hbi9saXN0aW5mby9pbnRlbC1nZngKX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX18KSW50ZWwtZ2Z4IG1haWxpbmcgbGlzdApJbnRl bC1nZnhAbGlzdHMuZnJlZWRlc2t0b3Aub3JnCmh0dHBzOi8vbGlzdHMuZnJlZWRlc2t0b3Aub3Jn L21haWxtYW4vbGlzdGluZm8vaW50ZWwtZ2Z4Cg==