Subject: Re: [Linaro-mm-sig] [PATCH 7/7] dma-resv: Give the docs a do-over
From: Christian König
To: Daniel Vetter, DRI Development
Cc: Intel Graphics Development, Christian König, linaro-mm-sig@lists.linaro.org,
 Daniel Vetter, linux-media@vger.kernel.org
Date: Wed, 7 Jul 2021 10:06:47 +0200
Message-ID: <280c31ae-0d9f-0548-11ea-5b25546741ae@gmail.com>
In-Reply-To: <20210706101209.3034092-8-daniel.vetter@ffwll.ch>
References: <20210706101209.3034092-1-daniel.vetter@ffwll.ch>
 <20210706101209.3034092-8-daniel.vetter@ffwll.ch>

On 06.07.21 12:12, Daniel Vetter wrote:
> Specifically document the new/clarified rules around how the shared
> fences do not have any ordering requirements against the exclusive
> fence.
>
> But also document all the things a bit better, given how central
> struct dma_resv is to dynamic buffer management, the docs have been very
> inadequate.
>
> - Lots more links to other pieces of the puzzle. Unfortunately
>   ttm_buffer_object has no docs, so no links :-(
>
> - Explain/complain a bit about dma_resv_locking_ctx(). I still don't
>   like that one, but fixing the ttm call chains is going to be
>   horrible. Plus we want to plug in real slowpath locking when we do
>   that anyway.
>
> - Main part of the patch is some actual docs for struct dma_resv.
>
> Overall I think we still have a lot of bad naming in this area (e.g.
> dma_resv.fence is singular, but contains the multiple shared fences),
> but I think that's more indicative of how the semantics and rules are
> just not great.
>
> Another thing that's really awkward is how chaining exclusive fences
> right now means direct dma_resv.exclusive_fence pointer access with an
> rcu_assign_pointer. Not so great either.
>
> Signed-off-by: Daniel Vetter
> Cc: Sumit Semwal
> Cc: "Christian König"
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> ---
>  drivers/dma-buf/dma-resv.c |  22 ++++++--
>  include/linux/dma-resv.h   | 104 +++++++++++++++++++++++++++++++++++--
>  2 files changed, 116 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index f26c71747d43..898f8d894bbd 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -48,6 +48,8 @@
>   * write operations) or N shared fences (read operations). The RCU
>   * mechanism is used to protect read access to fences from locked
>   * write-side updates.
> + *
> + * See struct dma_resv for more details.
>   */
>
>  DEFINE_WD_CLASS(reservation_ww_class);
> @@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini);
>   * @num_fences: number of fences we want to add
>   *
>   * Should be called before dma_resv_add_shared_fence(). Must
> - * be called with obj->lock held.
> + * be called with @obj locked through dma_resv_lock().
> + *
> + * Note that the preallocated slots need to be re-reserved if @obj is unlocked
> + * at any time before calling dma_resv_add_shared_fence(). This is validated
> + * when CONFIG_DEBUG_MUTEXES is enabled.
>   *
>   * RETURNS
>   * Zero for success, or -errno
> @@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max);
>   * @obj: the reservation object
>   * @fence: the shared fence to add
>   *
> - * Add a fence to a shared slot, obj->lock must be held, and
> + * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and
>   * dma_resv_reserve_shared() has been called.
> + *
> + * See also &dma_resv.fence for a discussion of the semantics.
>   */
>  void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
>  {
> @@ -280,7 +288,9 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence);
>   * @obj: the reservation object
>   * @fence: the shared fence to add
>   *
> - * Add a fence to the exclusive slot. The obj->lock must be held.
> + * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
> + * Note that this function replaces all fences attached to @obj, see also
> + * &dma_resv.fence_excl for a discussion of the semantics.
>   */
>  void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
>  {
> @@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
>   * fence
>   *
>   * Callers are not required to hold specific locks, but maybe hold
> - * dma_resv_lock() already
> + * dma_resv_lock() already.
> + *
>   * RETURNS
> - * true if all fences signaled, else false
> + *
> + * True if all fences signaled, else false.
>   */
>  bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
>  {
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index e1ca2080a1ff..c77fd54d033f 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -62,16 +62,90 @@ struct dma_resv_list {
>
>  /**
>   * struct dma_resv - a reservation object manages fences for a buffer
> - * @lock: update side lock
> - * @seq: sequence count for managing RCU read-side synchronization
> - * @fence_excl: the exclusive fence, if there is one currently
> - * @fence: list of current shared fences
> + *
> + * There are multiple uses for this, with sometimes slightly different rules in
> + * how the fence slots are used.
> + *
> + * One use is to synchronize cross-driver access to a struct dma_buf, either for
> + * dynamic buffer management or just to handle implicit synchronization between
> + * different users of the buffer in userspace. See &dma_buf.resv for a more
> + * in-depth discussion.
> + *
> + * The other major use is to manage access and locking within a driver in a
> + * buffer based memory manager. struct ttm_buffer_object is the canonical
> + * example here, since this is where reservation objects originated from. But
> + * use in drivers is spreading and some drivers also manage struct
> + * drm_gem_object with the same scheme.

I would still make that even harder, e.g. mentioning that you run into
use-after-free and the resulting memory corruption if you don't obey the
rules.

Apart from that, with the spelling stuff pointed out by others fixed, the
patch is Reviewed-by: Christian König

Regards,
Christian.
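To make those rules concrete, here is a minimal sketch (not part of the patch)
of how a driver could attach a fence under dma_resv_lock(); the helper name,
the read/write split and the error handling are assumptions made purely for
illustration:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/*
 * Illustrative only: publish one fence for either a write (exclusive slot)
 * or a read (shared slot). The shared slot has to be reserved in the same
 * dma_resv_lock() critical section that adds the fence, because unlocking
 * invalidates the preallocated slots again.
 */
static int publish_access_fence(struct dma_resv *resv,
				struct dma_fence *fence, bool is_write)
{
	int ret;

	ret = dma_resv_lock(resv, NULL);	/* single object, no acquire ctx */
	if (ret)
		return ret;

	if (is_write) {
		/*
		 * Replaces all fences currently attached to @resv, so @fence
		 * must only signal after all of them have signalled.
		 */
		dma_resv_add_excl_fence(resv, fence);
	} else {
		ret = dma_resv_reserve_shared(resv, 1);	/* may fail, do it early */
		if (!ret)
			/* Past the point of no return, this cannot fail. */
			dma_resv_add_shared_fence(resv, fence);
	}

	dma_resv_unlock(resv);
	return ret;
}

Dropping the lock between the reserve and the add is exactly the mistake the
new CONFIG_DEBUG_MUTEXES check described above is meant to catch.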
>   */
>  struct dma_resv {
> +	/**
> +	 * @lock:
> +	 *
> +	 * Update side lock. Don't use directly, instead use the wrapper
> +	 * functions like dma_resv_lock() and dma_resv_unlock().
> +	 *
> +	 * Drivers which use the reservation object to manage memory dynamically
> +	 * also use this lock to protect buffer object state like placement,
> +	 * allocation policies or throughout command submission.
> +	 */
>  	struct ww_mutex lock;
> +
> +	/**
> +	 * @seq:
> +	 *
> +	 * Sequence count for managing RCU read-side synchronization, allows
> +	 * read-only access to @fence_excl and @fence while ensuring we take a
> +	 * consistent snapshot.
> +	 */
>  	seqcount_ww_mutex_t seq;
>
> +	/**
> +	 * @fence_excl:
> +	 *
> +	 * The exclusive fence, if there is one currently.
> +	 *
> +	 * There are two ways to update this fence:
> +	 *
> +	 * - First by calling dma_resv_add_excl_fence(), which replaces all
> +	 *   fences attached to the reservation object. To guarantee that no
> +	 *   fences are lost this new fence must signal only after all previous
> +	 *   fences, both shared and exclusive, have signalled. In some cases it
> +	 *   is convenient to achieve that by attaching a struct dma_fence_array
> +	 *   with all the new and old fences.
> +	 *
> +	 * - Alternatively the fence can be set directly, which leaves the
> +	 *   shared fences unchanged. To guarantee that no fences are lost this
> +	 *   new fence must signal only after the previous exclusive fence has
> +	 *   signalled. Since the shared fences are staying intact, it is not
> +	 *   necessary to maintain any ordering against those. If semantically
> +	 *   only a new access is added without actually treating the previous
> +	 *   one as a dependency the exclusive fences can be strung together
> +	 *   using struct dma_fence_chain.
> +	 *
> +	 * Note that the actual semantics of what an exclusive or shared fence
> +	 * mean are defined by the user, for reservation objects shared across
> +	 * drivers see &dma_buf.resv.
> +	 */
>  	struct dma_fence __rcu *fence_excl;
> +
> +	/**
> +	 * @fence:
> +	 *
> +	 * List of current shared fences.
> +	 *
> +	 * There are no ordering constraints of shared fences against the
> +	 * exclusive fence slot. If a waiter needs to wait for all access, it
> +	 * has to wait for both sets of fences to signal.
> +	 *
> +	 * A new fence is added by calling dma_resv_add_shared_fence(). Since
> +	 * this often needs to be done past the point of no return in command
> +	 * submission it cannot fail, and therefore sufficient slots need to be
> +	 * reserved by calling dma_resv_reserve_shared().
> +	 *
> +	 * Note that the actual semantics of what an exclusive or shared fence
> +	 * mean are defined by the user, for reservation objects shared across
> +	 * drivers see &dma_buf.resv.
> +	 */
>  	struct dma_resv_list __rcu *fence;
>  };
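The "wait for both sets of fences" rule can be exercised without holding the
lock through dma_resv_test_signaled() from the first hunk; a tiny, purely
hypothetical helper for illustration:

/*
 * Hypothetical idle check: with include_readers == true the exclusive fence
 * and all shared fences are tested, i.e. every access has finished; with
 * false only the exclusive slot (the last write) is considered.
 */
static bool buffer_is_idle(struct dma_resv *resv, bool include_readers)
{
	return dma_resv_test_signaled(resv, include_readers);
}

A driver's busy/idle query would typically wrap exactly this kind of check,
with the caller deciding whether pending reads should count as busy.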
>
> @@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {}
>   * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
>   * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
>   * object may be locked by itself by passing NULL as @ctx.
> + *
> + * When a die situation is indicated by returning -EDEADLK all locks held by
> + * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj.
> + *
> + * Unlocked by calling dma_resv_unlock().
> + *
> + * See also dma_resv_lock_interruptible() for the interruptible variant.
>   */
>  static inline int dma_resv_lock(struct dma_resv *obj,
>  				struct ww_acquire_ctx *ctx)
> @@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj,
>   * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
>   * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
>   * object may be locked by itself by passing NULL as @ctx.
> + *
> + * When a die situation is indicated by returning -EDEADLK all locks held by
> + * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on
> + * @obj.
> + *
> + * Unlocked by calling dma_resv_unlock().
>   */
>  static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
>  					      struct ww_acquire_ctx *ctx)
> @@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
>   * Acquires the reservation object after a die case. This function
>   * will sleep until the lock becomes available. See dma_resv_lock() as
>   * well.
> + *
> + * See also dma_resv_lock_slow_interruptible() for the interruptible variant.
>   */
>  static inline void dma_resv_lock_slow(struct dma_resv *obj,
>  				      struct ww_acquire_ctx *ctx)
> @@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj,
>   * if they overlap with a writer.
>   *
>   * Also note that since no context is provided, no deadlock protection is
> - * possible.
> + * possible, which is also not needed for a trylock.
>   *
>   * Returns true if the lock was acquired, false otherwise.
>   */
> @@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj)
>   *
>   * Returns the context used to lock a reservation object or NULL if no context
>   * was used or the object is not locked at all.
> + *
> + * WARNING: This interface is pretty horrible, but TTM needs it because it
> + * doesn't pass the struct ww_acquire_ctx around in some very long callchains.
> + * Everyone else just uses it to check whether they're holding a reservation or
> + * not.
>   */
>  static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)
>  {
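For completeness, a sketch of the -EDEADLK backoff dance that the
dma_resv_lock()/dma_resv_lock_slow() documentation above describes, here for
two reservation objects; the helper is hypothetical and assumes the
non-interruptible variants, which only ever fail with -EDEADLK:

#include <linux/dma-resv.h>

/*
 * Hypothetical helper: lock two reservation objects in undefined order,
 * backing off to the slowpath on contention as documented above.
 */
static void lock_pair(struct dma_resv *a, struct dma_resv *b,
		      struct ww_acquire_ctx *ctx)
{
	ww_acquire_init(ctx, &reservation_ww_class);

	/* The first lock under a fresh acquire context cannot hit -EDEADLK. */
	WARN_ON(dma_resv_lock(a, ctx));

	while (dma_resv_lock(b, ctx) == -EDEADLK) {
		struct dma_resv *tmp;

		/*
		 * Deadlock against another task: drop every lock held under
		 * @ctx, take the contended object with the sleeping slowpath
		 * and then retry the remaining one.
		 */
		dma_resv_unlock(a);
		dma_resv_lock_slow(b, ctx);

		/* Keep @a as the object that is already locked. */
		tmp = a;
		a = b;
		b = tmp;
	}

	/* Both objects are locked now; no further locks will be taken. */
	ww_acquire_done(ctx);
}

The caller eventually drops both objects with dma_resv_unlock() and finishes
the context with ww_acquire_fini(); with the interruptible variants the
-EINTR case would need handling on top.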