From: Daniel Vetter
To: DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Christian König,
 Jason Ekstrand, Matthew Auld, Sumit Semwal,
 linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v3 20/20] dma-resv: Give the docs a do-over
Date: Thu, 8 Jul 2021 19:37:54 +0200
Message-Id: <20210708173754.3877540-21-daniel.vetter@ffwll.ch>
In-Reply-To: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>
References: <20210708173754.3877540-1-daniel.vetter@ffwll.ch>

Specifically document the new/clarified rules around how the shared
fences do not have any ordering requirements against the exclusive
fence.
But also document all the things a bit better: given how central
struct dma_resv is to dynamic buffer management, the docs have been
very inadequate.

- Lots more links to other pieces of the puzzle. Unfortunately
  ttm_buffer_object has no docs, so no links :-(

- Explain/complain a bit about dma_resv_locking_ctx(). I still don't
  like that one, but fixing the ttm call chains is going to be
  horrible. Plus we want to plug in real slowpath locking when we do
  that anyway.

- Main part of the patch is some actual docs for struct dma_resv.

Overall I think we still have a lot of bad naming in this area (e.g.
dma_resv.fence is singular, but contains the multiple shared fences),
but I think that's more indicative of how the semantics and rules are
just not great.

Another thing that's really awkward is how chaining exclusive fences
right now means direct dma_resv.exclusive_fence pointer access with an
rcu_assign_pointer. Not so great either.

v2:
- Fix a pile of typos (Matt, Jason)
- Hammer it in that breaking the rules leads to use-after-free issues
  around dma-buf sharing (Christian)

Reviewed-by: Christian König
Cc: Jason Ekstrand
Cc: Matthew Auld
Reviewed-by: Matthew Auld
Signed-off-by: Daniel Vetter
Cc: Sumit Semwal
Cc: "Christian König"
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/dma-buf/dma-resv.c |  24 ++++++---
 include/linux/dma-buf.h    |   7 +++
 include/linux/dma-resv.h   | 104 +++++++++++++++++++++++++++++++++++--
 3 files changed, 124 insertions(+), 11 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index f26c71747d43..a3acb6479ddf 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -48,6 +48,8 @@
  * write operations) or N shared fences (read operations). The RCU
  * mechanism is used to protect read access to fences from locked
  * write-side updates.
+ *
+ * See struct dma_resv for more details.
  */
 DEFINE_WD_CLASS(reservation_ww_class);

@@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini);
  * @num_fences: number of fences we want to add
  *
  * Should be called before dma_resv_add_shared_fence(). Must
- * be called with obj->lock held.
+ * be called with @obj locked through dma_resv_lock().
+ *
+ * Note that the preallocated slots need to be re-reserved if @obj is unlocked
+ * at any time before calling dma_resv_add_shared_fence(). This is validated
+ * when CONFIG_DEBUG_MUTEXES is enabled.
  *
  * RETURNS
  * Zero for success, or -errno
@@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max);
  * @obj: the reservation object
  * @fence: the shared fence to add
  *
- * Add a fence to a shared slot, obj->lock must be held, and
+ * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and
  * dma_resv_reserve_shared() has been called.
+ *
+ * See also &dma_resv.fence for a discussion of the semantics.
  */
 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
@@ -278,9 +286,11 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence);
 /**
  * dma_resv_add_excl_fence - Add an exclusive fence.
  * @obj: the reservation object
- * @fence: the shared fence to add
+ * @fence: the exclusive fence to add
  *
- * Add a fence to the exclusive slot. The obj->lock must be held.
+ * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
+ * Note that this function replaces all fences attached to @obj, see also
+ * &dma_resv.fence_excl for a discussion of the semantics.
  */
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
@@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
  * fence
  *
  * Callers are not required to hold specific locks, but maybe hold
- * dma_resv_lock() already
+ * dma_resv_lock() already.
+ *
  * RETURNS
- * true if all fences signaled, else false
+ *
+ * True if all fences signaled, else false.
  */
 bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 2b814fde0d11..8cc0c55877a6 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -420,6 +420,13 @@ struct dma_buf {
	 * - Dynamic importers should set fences for any access that they can't
	 *   disable immediately from their &dma_buf_attach_ops.move_notify
	 *   callback.
+	 *
+	 * IMPORTANT:
+	 *
+	 * All drivers must obey the struct dma_resv rules, specifically the
+	 * rules for updating fences, see &dma_resv.fence_excl and
+	 * &dma_resv.fence. If these dependency rules are broken access tracking
+	 * can be lost resulting in use after free issues.
	 */
	struct dma_resv *resv;
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index e1ca2080a1ff..9100dd3dc21f 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -62,16 +62,90 @@ struct dma_resv_list {
 /**
  * struct dma_resv - a reservation object manages fences for a buffer
- * @lock: update side lock
- * @seq: sequence count for managing RCU read-side synchronization
- * @fence_excl: the exclusive fence, if there is one currently
- * @fence: list of current shared fences
+ *
+ * There are multiple uses for this, with sometimes slightly different rules in
+ * how the fence slots are used.
+ *
+ * One use is to synchronize cross-driver access to a struct dma_buf, either for
+ * dynamic buffer management or just to handle implicit synchronization between
+ * different users of the buffer in userspace. See &dma_buf.resv for a more
+ * in-depth discussion.
+ *
+ * The other major use is to manage access and locking within a driver in a
+ * buffer based memory manager. struct ttm_buffer_object is the canonical
+ * example here, since this is where reservation objects originated from. But
+ * use in drivers is spreading and some drivers also manage struct
+ * drm_gem_object with the same scheme.
  */
 struct dma_resv {
+	/**
+	 * @lock:
+	 *
+	 * Update side lock. Don't use directly, instead use the wrapper
+	 * functions like dma_resv_lock() and dma_resv_unlock().
+	 *
+	 * Drivers which use the reservation object to manage memory dynamically
+	 * also use this lock to protect buffer object state like placement,
+	 * allocation policies or throughout command submission.
+	 */
	struct ww_mutex lock;
+
+	/**
+	 * @seq:
+	 *
+	 * Sequence count for managing RCU read-side synchronization, allows
+	 * read-only access to @fence_excl and @fence while ensuring we take a
+	 * consistent snapshot.
+	 */
	seqcount_ww_mutex_t seq;

+	/**
+	 * @fence_excl:
+	 *
+	 * The exclusive fence, if there is one currently.
+	 *
+	 * There are two ways to update this fence:
+	 *
+	 * - First by calling dma_resv_add_excl_fence(), which replaces all
+	 *   fences attached to the reservation object. To guarantee that no
+	 *   fences are lost, this new fence must signal only after all previous
+	 *   fences, both shared and exclusive, have signalled. In some cases it
+	 *   is convenient to achieve that by attaching a struct dma_fence_array
+	 *   with all the new and old fences.
+	 *
+	 * - Alternatively the fence can be set directly, which leaves the
+	 *   shared fences unchanged. To guarantee that no fences are lost, this
+	 *   new fence must signal only after the previous exclusive fence has
+	 *   signalled. Since the shared fences are staying intact, it is not
+	 *   necessary to maintain any ordering against those. If semantically
+	 *   only a new access is added without actually treating the previous
+	 *   one as a dependency the exclusive fences can be strung together
+	 *   using struct dma_fence_chain.
+	 *
+	 * Note that actual semantics of what an exclusive or shared fence mean
+	 * is defined by the user, for reservation objects shared across drivers
+	 * see &dma_buf.resv.
+	 */
	struct dma_fence __rcu *fence_excl;
+
+	/**
+	 * @fence:
+	 *
+	 * List of current shared fences.
+	 *
+	 * There are no ordering constraints of shared fences against the
+	 * exclusive fence slot. If a waiter needs to wait for all access, it
+	 * has to wait for both sets of fences to signal.
+	 *
+	 * A new fence is added by calling dma_resv_add_shared_fence(). Since
+	 * this often needs to be done past the point of no return in command
+	 * submission it cannot fail, and therefore sufficient slots need to be
+	 * reserved by calling dma_resv_reserve_shared().
+	 *
+	 * Note that actual semantics of what an exclusive or shared fence mean
+	 * is defined by the user, for reservation objects shared across drivers
+	 * see &dma_buf.resv.
+	 */
	struct dma_resv_list __rcu *fence;
 };

@@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {}
  * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
  * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
  * object may be locked by itself by passing NULL as @ctx.
+ *
+ * When a die situation is indicated by returning -EDEADLK all locks held by
+ * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj.
+ *
+ * Unlocked by calling dma_resv_unlock().
+ *
+ * See also dma_resv_lock_interruptible() for the interruptible variant.
  */
 static inline int dma_resv_lock(struct dma_resv *obj,
				struct ww_acquire_ctx *ctx)
@@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj,
  * undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
  * is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
  * object may be locked by itself by passing NULL as @ctx.
+ *
+ * When a die situation is indicated by returning -EDEADLK all locks held by
+ * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on
+ * @obj.
+ *
+ * Unlocked by calling dma_resv_unlock().
  */
 static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
					      struct ww_acquire_ctx *ctx)
@@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
  * Acquires the reservation object after a die case. This function
  * will sleep until the lock becomes available. See dma_resv_lock() as
  * well.
+ *
+ * See also dma_resv_lock_slow_interruptible() for the interruptible variant.
  */
 static inline void dma_resv_lock_slow(struct dma_resv *obj,
				      struct ww_acquire_ctx *ctx)
@@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj,
  * if they overlap with a writer.
  *
  * Also note that since no context is provided, no deadlock protection is
- * possible.
+ * possible, which is also not needed for a trylock.
  *
  * Returns true if the lock was acquired, false otherwise.
  */
@@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj)
  *
  * Returns the context used to lock a reservation object or NULL if no context
  * was used or the object is not locked at all.
+ *
+ * WARNING: This interface is pretty horrible, but TTM needs it because it
+ * doesn't pass the struct ww_acquire_ctx around in some very long callchains.
+ * Everyone else just uses it to check whether they're holding a reservation or
+ * not.
  */
 static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)
 {
-- 
2.32.0
c3RydWN0IGRtYV9yZXN2IGZvciBtb3JlIGRldGFpbHMuCiAgKi8KIAogREVGSU5FX1dEX0NMQVNT KHJlc2VydmF0aW9uX3d3X2NsYXNzKTsKQEAgLTEzNyw3ICsxMzksMTEgQEAgRVhQT1JUX1NZTUJP TChkbWFfcmVzdl9maW5pKTsKICAqIEBudW1fZmVuY2VzOiBudW1iZXIgb2YgZmVuY2VzIHdlIHdh bnQgdG8gYWRkCiAgKgogICogU2hvdWxkIGJlIGNhbGxlZCBiZWZvcmUgZG1hX3Jlc3ZfYWRkX3No YXJlZF9mZW5jZSgpLiAgTXVzdAotICogYmUgY2FsbGVkIHdpdGggb2JqLT5sb2NrIGhlbGQuCisg KiBiZSBjYWxsZWQgd2l0aCBAb2JqIGxvY2tlZCB0aHJvdWdoIGRtYV9yZXN2X2xvY2soKS4KKyAq CisgKiBOb3RlIHRoYXQgdGhlIHByZWFsbG9jYXRlZCBzbG90cyBuZWVkIHRvIGJlIHJlLXJlc2Vy dmVkIGlmIEBvYmogaXMgdW5sb2NrZWQKKyAqIGF0IGFueSB0aW1lIGJlZm9yZSBjYWxsaW5nIGRt YV9yZXN2X2FkZF9zaGFyZWRfZmVuY2UoKS4gVGhpcyBpcyB2YWxpZGF0ZWQKKyAqIHdoZW4gQ09O RklHX0RFQlVHX01VVEVYRVMgaXMgZW5hYmxlZC4KICAqCiAgKiBSRVRVUk5TCiAgKiBaZXJvIGZv ciBzdWNjZXNzLCBvciAtZXJybm8KQEAgLTIzNCw4ICsyNDAsMTAgQEAgRVhQT1JUX1NZTUJPTChk bWFfcmVzdl9yZXNldF9zaGFyZWRfbWF4KTsKICAqIEBvYmo6IHRoZSByZXNlcnZhdGlvbiBvYmpl Y3QKICAqIEBmZW5jZTogdGhlIHNoYXJlZCBmZW5jZSB0byBhZGQKICAqCi0gKiBBZGQgYSBmZW5j ZSB0byBhIHNoYXJlZCBzbG90LCBvYmotPmxvY2sgbXVzdCBiZSBoZWxkLCBhbmQKKyAqIEFkZCBh IGZlbmNlIHRvIGEgc2hhcmVkIHNsb3QsIEBvYmogbXVzdCBiZSBsb2NrZWQgd2l0aCBkbWFfcmVz dl9sb2NrKCksIGFuZAogICogZG1hX3Jlc3ZfcmVzZXJ2ZV9zaGFyZWQoKSBoYXMgYmVlbiBjYWxs ZWQuCisgKgorICogU2VlIGFsc28gJmRtYV9yZXN2LmZlbmNlIGZvciBhIGRpc2N1c3Npb24gb2Yg dGhlIHNlbWFudGljcy4KICAqLwogdm9pZCBkbWFfcmVzdl9hZGRfc2hhcmVkX2ZlbmNlKHN0cnVj dCBkbWFfcmVzdiAqb2JqLCBzdHJ1Y3QgZG1hX2ZlbmNlICpmZW5jZSkKIHsKQEAgLTI3OCw5ICsy ODYsMTEgQEAgRVhQT1JUX1NZTUJPTChkbWFfcmVzdl9hZGRfc2hhcmVkX2ZlbmNlKTsKIC8qKgog ICogZG1hX3Jlc3ZfYWRkX2V4Y2xfZmVuY2UgLSBBZGQgYW4gZXhjbHVzaXZlIGZlbmNlLgogICog QG9iajogdGhlIHJlc2VydmF0aW9uIG9iamVjdAotICogQGZlbmNlOiB0aGUgc2hhcmVkIGZlbmNl IHRvIGFkZAorICogQGZlbmNlOiB0aGUgZXhjbHVzaXZlIGZlbmNlIHRvIGFkZAogICoKLSAqIEFk ZCBhIGZlbmNlIHRvIHRoZSBleGNsdXNpdmUgc2xvdC4gIFRoZSBvYmotPmxvY2sgbXVzdCBiZSBo ZWxkLgorICogQWRkIGEgZmVuY2UgdG8gdGhlIGV4Y2x1c2l2ZSBzbG90LiBAb2JqIG11c3QgYmUg 
bG9ja2VkIHdpdGggZG1hX3Jlc3ZfbG9jaygpLgorICogTm90ZSB0aGF0IHRoaXMgZnVuY3Rpb24g cmVwbGFjZXMgYWxsIGZlbmNlcyBhdHRhY2hlZCB0byBAb2JqLCBzZWUgYWxzbworICogJmRtYV9y ZXN2LmZlbmNlX2V4Y2wgZm9yIGEgZGlzY3Vzc2lvbiBvZiB0aGUgc2VtYW50aWNzLgogICovCiB2 b2lkIGRtYV9yZXN2X2FkZF9leGNsX2ZlbmNlKHN0cnVjdCBkbWFfcmVzdiAqb2JqLCBzdHJ1Y3Qg ZG1hX2ZlbmNlICpmZW5jZSkKIHsKQEAgLTYwOSw5ICs2MTksMTEgQEAgc3RhdGljIGlubGluZSBp bnQgZG1hX3Jlc3ZfdGVzdF9zaWduYWxlZF9zaW5nbGUoc3RydWN0IGRtYV9mZW5jZSAqcGFzc2Vk X2ZlbmNlKQogICogZmVuY2UKICAqCiAgKiBDYWxsZXJzIGFyZSBub3QgcmVxdWlyZWQgdG8gaG9s ZCBzcGVjaWZpYyBsb2NrcywgYnV0IG1heWJlIGhvbGQKLSAqIGRtYV9yZXN2X2xvY2soKSBhbHJl YWR5CisgKiBkbWFfcmVzdl9sb2NrKCkgYWxyZWFkeS4KKyAqCiAgKiBSRVRVUk5TCi0gKiB0cnVl IGlmIGFsbCBmZW5jZXMgc2lnbmFsZWQsIGVsc2UgZmFsc2UKKyAqCisgKiBUcnVlIGlmIGFsbCBm ZW5jZXMgc2lnbmFsZWQsIGVsc2UgZmFsc2UuCiAgKi8KIGJvb2wgZG1hX3Jlc3ZfdGVzdF9zaWdu YWxlZChzdHJ1Y3QgZG1hX3Jlc3YgKm9iaiwgYm9vbCB0ZXN0X2FsbCkKIHsKZGlmZiAtLWdpdCBh L2luY2x1ZGUvbGludXgvZG1hLWJ1Zi5oIGIvaW5jbHVkZS9saW51eC9kbWEtYnVmLmgKaW5kZXgg MmI4MTRmZGUwZDExLi44Y2MwYzU1ODc3YTYgMTAwNjQ0Ci0tLSBhL2luY2x1ZGUvbGludXgvZG1h LWJ1Zi5oCisrKyBiL2luY2x1ZGUvbGludXgvZG1hLWJ1Zi5oCkBAIC00MjAsNiArNDIwLDEzIEBA IHN0cnVjdCBkbWFfYnVmIHsKIAkgKiAtIER5bmFtaWMgaW1wb3J0ZXJzIHNob3VsZCBzZXQgZmVu Y2VzIGZvciBhbnkgYWNjZXNzIHRoYXQgdGhleSBjYW4ndAogCSAqICAgZGlzYWJsZSBpbW1lZGlh dGVseSBmcm9tIHRoZWlyICZkbWFfYnVmX2F0dGFjaF9vcHMubW92ZV9ub3RpZnkKIAkgKiAgIGNh bGxiYWNrLgorCSAqCisJICogSU1QT1JUQU5UOgorCSAqCisJICogQWxsIGRyaXZlcnMgbXVzdCBv YmV5IHRoZSBzdHJ1Y3QgZG1hX3Jlc3YgcnVsZXMsIHNwZWNpZmljYWxseSB0aGUKKwkgKiBydWxl cyBmb3IgdXBkYXRpbmcgZmVuY2VzLCBzZWUgJmRtYV9yZXN2LmZlbmNlX2V4Y2wgYW5kCisJICog JmRtYV9yZXN2LmZlbmNlLiBJZiB0aGVzZSBkZXBlbmRlbmN5IHJ1bGVzIGFyZSBicm9rZW4gYWNj ZXNzIHRyYWNraW5nCisJICogY2FuIGJlIGxvc3QgcmVzdWx0aW5nIGluIHVzZSBhZnRlciBmcmVl IGlzc3Vlcy4KIAkgKi8KIAlzdHJ1Y3QgZG1hX3Jlc3YgKnJlc3Y7CiAKZGlmZiAtLWdpdCBhL2lu Y2x1ZGUvbGludXgvZG1hLXJlc3YuaCBiL2luY2x1ZGUvbGludXgvZG1hLXJlc3YuaAppbmRleCBl 
MWNhMjA4MGExZmYuLjkxMDBkZDNkYzIxZiAxMDA2NDQKLS0tIGEvaW5jbHVkZS9saW51eC9kbWEt cmVzdi5oCisrKyBiL2luY2x1ZGUvbGludXgvZG1hLXJlc3YuaApAQCAtNjIsMTYgKzYyLDkwIEBA IHN0cnVjdCBkbWFfcmVzdl9saXN0IHsKIAogLyoqCiAgKiBzdHJ1Y3QgZG1hX3Jlc3YgLSBhIHJl c2VydmF0aW9uIG9iamVjdCBtYW5hZ2VzIGZlbmNlcyBmb3IgYSBidWZmZXIKLSAqIEBsb2NrOiB1 cGRhdGUgc2lkZSBsb2NrCi0gKiBAc2VxOiBzZXF1ZW5jZSBjb3VudCBmb3IgbWFuYWdpbmcgUkNV IHJlYWQtc2lkZSBzeW5jaHJvbml6YXRpb24KLSAqIEBmZW5jZV9leGNsOiB0aGUgZXhjbHVzaXZl IGZlbmNlLCBpZiB0aGVyZSBpcyBvbmUgY3VycmVudGx5Ci0gKiBAZmVuY2U6IGxpc3Qgb2YgY3Vy cmVudCBzaGFyZWQgZmVuY2VzCisgKgorICogVGhlcmUgYXJlIG11bHRpcGxlIHVzZXMgZm9yIHRo aXMsIHdpdGggc29tZXRpbWVzIHNsaWdodGx5IGRpZmZlcmVudCBydWxlcyBpbgorICogaG93IHRo ZSBmZW5jZSBzbG90cyBhcmUgdXNlZC4KKyAqCisgKiBPbmUgdXNlIGlzIHRvIHN5bmNocm9uaXpl IGNyb3NzLWRyaXZlciBhY2Nlc3MgdG8gYSBzdHJ1Y3QgZG1hX2J1ZiwgZWl0aGVyIGZvcgorICog ZHluYW1pYyBidWZmZXIgbWFuYWdlbWVudCBvciBqdXN0IHRvIGhhbmRsZSBpbXBsaWNpdCBzeW5j aHJvbml6YXRpb24gYmV0d2VlbgorICogZGlmZmVyZW50IHVzZXJzIG9mIHRoZSBidWZmZXIgaW4g dXNlcnNwYWNlLiBTZWUgJmRtYV9idWYucmVzdiBmb3IgYSBtb3JlCisgKiBpbi1kZXB0aCBkaXNj dXNzaW9uLgorICoKKyAqIFRoZSBvdGhlciBtYWpvciB1c2UgaXMgdG8gbWFuYWdlIGFjY2VzcyBh bmQgbG9ja2luZyB3aXRoaW4gYSBkcml2ZXIgaW4gYQorICogYnVmZmVyIGJhc2VkIG1lbW9yeSBt YW5hZ2VyLiBzdHJ1Y3QgdHRtX2J1ZmZlcl9vYmplY3QgaXMgdGhlIGNhbm9uaWNhbAorICogZXhh bXBsZSBoZXJlLCBzaW5jZSB0aGlzIGlzIHdoZXJlIHJlc2VydmF0aW9uIG9iamVjdHMgb3JpZ2lu YXRlZCBmcm9tLiBCdXQKKyAqIHVzZSBpbiBkcml2ZXJzIGlzIHNwcmVhZGluZyBhbmQgc29tZSBk cml2ZXJzIGFsc28gbWFuYWdlIHN0cnVjdAorICogZHJtX2dlbV9vYmplY3Qgd2l0aCB0aGUgc2Ft ZSBzY2hlbWUuCiAgKi8KIHN0cnVjdCBkbWFfcmVzdiB7CisJLyoqCisJICogQGxvY2s6CisJICoK KwkgKiBVcGRhdGUgc2lkZSBsb2NrLiBEb24ndCB1c2UgZGlyZWN0bHksIGluc3RlYWQgdXNlIHRo ZSB3cmFwcGVyCisJICogZnVuY3Rpb25zIGxpa2UgZG1hX3Jlc3ZfbG9jaygpIGFuZCBkbWFfcmVz dl91bmxvY2soKS4KKwkgKgorCSAqIERyaXZlcnMgd2hpY2ggdXNlIHRoZSByZXNlcnZhdGlvbiBv YmplY3QgdG8gbWFuYWdlIG1lbW9yeSBkeW5hbWljYWxseQorCSAqIGFsc28gdXNlIHRoaXMgbG9j 
ayB0byBwcm90ZWN0IGJ1ZmZlciBvYmplY3Qgc3RhdGUgbGlrZSBwbGFjZW1lbnQsCisJICogYWxs b2NhdGlvbiBwb2xpY2llcyBvciB0aHJvdWdob3V0IGNvbW1hbmQgc3VibWlzc2lvbi4KKwkgKi8K IAlzdHJ1Y3Qgd3dfbXV0ZXggbG9jazsKKworCS8qKgorCSAqIEBzZXE6CisJICoKKwkgKiBTZXF1 ZW5jZSBjb3VudCBmb3IgbWFuYWdpbmcgUkNVIHJlYWQtc2lkZSBzeW5jaHJvbml6YXRpb24sIGFs bG93cworCSAqIHJlYWQtb25seSBhY2Nlc3MgdG8gQGZlbmNlX2V4Y2wgYW5kIEBmZW5jZSB3aGls ZSBlbnN1cmluZyB3ZSB0YWtlIGEKKwkgKiBjb25zaXN0ZW50IHNuYXBzaG90LgorCSAqLwogCXNl cWNvdW50X3d3X211dGV4X3Qgc2VxOwogCisJLyoqCisJICogQGZlbmNlX2V4Y2w6CisJICoKKwkg KiBUaGUgZXhjbHVzaXZlIGZlbmNlLCBpZiB0aGVyZSBpcyBvbmUgY3VycmVudGx5LgorCSAqCisJ ICogVGhlcmUgYXJlIHR3byB3YXlzIHRvIHVwZGF0ZSB0aGlzIGZlbmNlOgorCSAqCisJICogLSBG aXJzdCBieSBjYWxsaW5nIGRtYV9yZXN2X2FkZF9leGNsX2ZlbmNlKCksIHdoaWNoIHJlcGxhY2Vz IGFsbAorCSAqICAgZmVuY2VzIGF0dGFjaGVkIHRvIHRoZSByZXNlcnZhdGlvbiBvYmplY3QuIFRv IGd1YXJhbnRlZSB0aGF0IG5vCisJICogICBmZW5jZXMgYXJlIGxvc3QsIHRoaXMgbmV3IGZlbmNl IG11c3Qgc2lnbmFsIG9ubHkgYWZ0ZXIgYWxsIHByZXZpb3VzCisJICogICBmZW5jZXMsIGJvdGgg c2hhcmVkIGFuZCBleGNsdXNpdmUsIGhhdmUgc2lnbmFsbGVkLiBJbiBzb21lIGNhc2VzIGl0CisJ ICogICBpcyBjb252ZW5pZW50IHRvIGFjaGlldmUgdGhhdCBieSBhdHRhY2hpbmcgYSBzdHJ1Y3Qg ZG1hX2ZlbmNlX2FycmF5CisJICogICB3aXRoIGFsbCB0aGUgbmV3IGFuZCBvbGQgZmVuY2VzLgor CSAqCisJICogLSBBbHRlcm5hdGl2ZWx5IHRoZSBmZW5jZSBjYW4gYmUgc2V0IGRpcmVjdGx5LCB3 aGljaCBsZWF2ZXMgdGhlCisJICogICBzaGFyZWQgZmVuY2VzIHVuY2hhbmdlZC4gVG8gZ3VhcmFu dGVlIHRoYXQgbm8gZmVuY2VzIGFyZSBsb3N0LCB0aGlzCisJICogICBuZXcgZmVuY2UgbXVzdCBz aWduYWwgb25seSBhZnRlciB0aGUgcHJldmlvdXMgZXhjbHVzaXZlIGZlbmNlIGhhcworCSAqICAg c2lnbmFsbGVkLiBTaW5jZSB0aGUgc2hhcmVkIGZlbmNlcyBhcmUgc3RheWluZyBpbnRhY3QsIGl0 IGlzIG5vdAorCSAqICAgbmVjZXNzYXJ5IHRvIG1haW50YWluIGFueSBvcmRlcmluZyBhZ2FpbnN0 IHRob3NlLiBJZiBzZW1hbnRpY2FsbHkKKwkgKiAgIG9ubHkgYSBuZXcgYWNjZXNzIGlzIGFkZGVk IHdpdGhvdXQgYWN0dWFsbHkgdHJlYXRpbmcgdGhlIHByZXZpb3VzCisJICogICBvbmUgYXMgYSBk ZXBlbmRlbmN5IHRoZSBleGNsdXNpdmUgZmVuY2VzIGNhbiBiZSBzdHJ1bmcgdG9nZXRoZXIKKwkg 
KiAgIHVzaW5nIHN0cnVjdCBkbWFfZmVuY2VfY2hhaW4uCisJICoKKwkgKiBOb3RlIHRoYXQgYWN0 dWFsIHNlbWFudGljcyBvZiB3aGF0IGFuIGV4Y2x1c2l2ZSBvciBzaGFyZWQgZmVuY2UgbWVhbgor CSAqIGlzIGRlZmluZWQgYnkgdGhlIHVzZXIsIGZvciByZXNlcnZhdGlvbiBvYmplY3RzIHNoYXJl ZCBhY3Jvc3MgZHJpdmVycworCSAqIHNlZSAmZG1hX2J1Zi5yZXN2LgorCSAqLwogCXN0cnVjdCBk bWFfZmVuY2UgX19yY3UgKmZlbmNlX2V4Y2w7CisKKwkvKioKKwkgKiBAZmVuY2U6CisJICoKKwkg KiBMaXN0IG9mIGN1cnJlbnQgc2hhcmVkIGZlbmNlcy4KKwkgKgorCSAqIFRoZXJlIGFyZSBubyBv cmRlcmluZyBjb25zdHJhaW50cyBvZiBzaGFyZWQgZmVuY2VzIGFnYWluc3QgdGhlCisJICogZXhj bHVzaXZlIGZlbmNlIHNsb3QuIElmIGEgd2FpdGVyIG5lZWRzIHRvIHdhaXQgZm9yIGFsbCBhY2Nl c3MsIGl0CisJICogaGFzIHRvIHdhaXQgZm9yIGJvdGggc2V0cyBvZiBmZW5jZXMgdG8gc2lnbmFs LgorCSAqCisJICogQSBuZXcgZmVuY2UgaXMgYWRkZWQgYnkgY2FsbGluZyBkbWFfcmVzdl9hZGRf c2hhcmVkX2ZlbmNlKCkuIFNpbmNlCisJICogdGhpcyBvZnRlbiBuZWVkcyB0byBiZSBkb25lIHBh c3QgdGhlIHBvaW50IG9mIG5vIHJldHVybiBpbiBjb21tYW5kCisJICogc3VibWlzc2lvbiBpdCBj YW5ub3QgZmFpbCwgYW5kIHRoZXJlZm9yZSBzdWZmaWNpZW50IHNsb3RzIG5lZWQgdG8gYmUKKwkg KiByZXNlcnZlZCBieSBjYWxsaW5nIGRtYV9yZXN2X3Jlc2VydmVfc2hhcmVkKCkuCisJICoKKwkg KiBOb3RlIHRoYXQgYWN0dWFsIHNlbWFudGljcyBvZiB3aGF0IGFuIGV4Y2x1c2l2ZSBvciBzaGFy ZWQgZmVuY2UgbWVhbgorCSAqIGlzIGRlZmluZWQgYnkgdGhlIHVzZXIsIGZvciByZXNlcnZhdGlv biBvYmplY3RzIHNoYXJlZCBhY3Jvc3MgZHJpdmVycworCSAqIHNlZSAmZG1hX2J1Zi5yZXN2Lgor CSAqLwogCXN0cnVjdCBkbWFfcmVzdl9saXN0IF9fcmN1ICpmZW5jZTsKIH07CiAKQEAgLTk4LDYg KzE3MiwxMyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgZG1hX3Jlc3ZfcmVzZXRfc2hhcmVkX21heChz dHJ1Y3QgZG1hX3Jlc3YgKm9iaikge30KICAqIHVuZGVmaW5lZCBvcmRlciwgYSAjd3dfYWNxdWly ZV9jdHggaXMgcGFzc2VkIHRvIHVud2luZCBpZiBhIGN5Y2xlCiAgKiBpcyBkZXRlY3RlZC4gU2Vl IHd3X211dGV4X2xvY2soKSBhbmQgd3dfYWNxdWlyZV9pbml0KCkuIEEgcmVzZXJ2YXRpb24KICAq IG9iamVjdCBtYXkgYmUgbG9ja2VkIGJ5IGl0c2VsZiBieSBwYXNzaW5nIE5VTEwgYXMgQGN0eC4K KyAqCisgKiBXaGVuIGEgZGllIHNpdHVhdGlvbiBpcyBpbmRpY2F0ZWQgYnkgcmV0dXJuaW5nIC1F REVBRExLIGFsbCBsb2NrcyBoZWxkIGJ5CisgKiBAY3R4IG11c3QgYmUgdW5sb2NrZWQgYW5kIHRo 
ZW4gZG1hX3Jlc3ZfbG9ja19zbG93KCkgY2FsbGVkIG9uIEBvYmouCisgKgorICogVW5sb2NrZWQg YnkgY2FsbGluZyBkbWFfcmVzdl91bmxvY2soKS4KKyAqCisgKiBTZWUgYWxzbyBkbWFfcmVzdl9s b2NrX2ludGVycnVwdGlibGUoKSBmb3IgdGhlIGludGVycnVwdGlibGUgdmFyaWFudC4KICAqLwog c3RhdGljIGlubGluZSBpbnQgZG1hX3Jlc3ZfbG9jayhzdHJ1Y3QgZG1hX3Jlc3YgKm9iaiwKIAkJ CQlzdHJ1Y3Qgd3dfYWNxdWlyZV9jdHggKmN0eCkKQEAgLTExOSw2ICsyMDAsMTIgQEAgc3RhdGlj IGlubGluZSBpbnQgZG1hX3Jlc3ZfbG9jayhzdHJ1Y3QgZG1hX3Jlc3YgKm9iaiwKICAqIHVuZGVm aW5lZCBvcmRlciwgYSAjd3dfYWNxdWlyZV9jdHggaXMgcGFzc2VkIHRvIHVud2luZCBpZiBhIGN5 Y2xlCiAgKiBpcyBkZXRlY3RlZC4gU2VlIHd3X211dGV4X2xvY2soKSBhbmQgd3dfYWNxdWlyZV9p bml0KCkuIEEgcmVzZXJ2YXRpb24KICAqIG9iamVjdCBtYXkgYmUgbG9ja2VkIGJ5IGl0c2VsZiBi eSBwYXNzaW5nIE5VTEwgYXMgQGN0eC4KKyAqCisgKiBXaGVuIGEgZGllIHNpdHVhdGlvbiBpcyBp bmRpY2F0ZWQgYnkgcmV0dXJuaW5nIC1FREVBRExLIGFsbCBsb2NrcyBoZWxkIGJ5CisgKiBAY3R4 IG11c3QgYmUgdW5sb2NrZWQgYW5kIHRoZW4gZG1hX3Jlc3ZfbG9ja19zbG93X2ludGVycnVwdGli bGUoKSBjYWxsZWQgb24KKyAqIEBvYmouCisgKgorICogVW5sb2NrZWQgYnkgY2FsbGluZyBkbWFf cmVzdl91bmxvY2soKS4KICAqLwogc3RhdGljIGlubGluZSBpbnQgZG1hX3Jlc3ZfbG9ja19pbnRl cnJ1cHRpYmxlKHN0cnVjdCBkbWFfcmVzdiAqb2JqLAogCQkJCQkgICAgICBzdHJ1Y3Qgd3dfYWNx dWlyZV9jdHggKmN0eCkKQEAgLTEzNCw2ICsyMjEsOCBAQCBzdGF0aWMgaW5saW5lIGludCBkbWFf cmVzdl9sb2NrX2ludGVycnVwdGlibGUoc3RydWN0IGRtYV9yZXN2ICpvYmosCiAgKiBBY3F1aXJl cyB0aGUgcmVzZXJ2YXRpb24gb2JqZWN0IGFmdGVyIGEgZGllIGNhc2UuIFRoaXMgZnVuY3Rpb24K ICAqIHdpbGwgc2xlZXAgdW50aWwgdGhlIGxvY2sgYmVjb21lcyBhdmFpbGFibGUuIFNlZSBkbWFf cmVzdl9sb2NrKCkgYXMKICAqIHdlbGwuCisgKgorICogU2VlIGFsc28gZG1hX3Jlc3ZfbG9ja19z bG93X2ludGVycnVwdGlibGUoKSBmb3IgdGhlIGludGVycnVwdGlibGUgdmFyaWFudC4KICAqLwog c3RhdGljIGlubGluZSB2b2lkIGRtYV9yZXN2X2xvY2tfc2xvdyhzdHJ1Y3QgZG1hX3Jlc3YgKm9i aiwKIAkJCQkgICAgICBzdHJ1Y3Qgd3dfYWNxdWlyZV9jdHggKmN0eCkKQEAgLTE2Nyw3ICsyNTYs NyBAQCBzdGF0aWMgaW5saW5lIGludCBkbWFfcmVzdl9sb2NrX3Nsb3dfaW50ZXJydXB0aWJsZShz dHJ1Y3QgZG1hX3Jlc3YgKm9iaiwKICAqIGlmIHRoZXkgb3ZlcmxhcCB3aXRoIGEgd3JpdGVyLgog 
ICoKICAqIEFsc28gbm90ZSB0aGF0IHNpbmNlIG5vIGNvbnRleHQgaXMgcHJvdmlkZWQsIG5vIGRl YWRsb2NrIHByb3RlY3Rpb24gaXMKLSAqIHBvc3NpYmxlLgorICogcG9zc2libGUsIHdoaWNoIGlz IGFsc28gbm90IG5lZWRlZCBmb3IgYSB0cnlsb2NrLgogICoKICAqIFJldHVybnMgdHJ1ZSBpZiB0 aGUgbG9jayB3YXMgYWNxdWlyZWQsIGZhbHNlIG90aGVyd2lzZS4KICAqLwpAQCAtMTkzLDYgKzI4 MiwxMSBAQCBzdGF0aWMgaW5saW5lIGJvb2wgZG1hX3Jlc3ZfaXNfbG9ja2VkKHN0cnVjdCBkbWFf cmVzdiAqb2JqKQogICoKICAqIFJldHVybnMgdGhlIGNvbnRleHQgdXNlZCB0byBsb2NrIGEgcmVz ZXJ2YXRpb24gb2JqZWN0IG9yIE5VTEwgaWYgbm8gY29udGV4dAogICogd2FzIHVzZWQgb3IgdGhl IG9iamVjdCBpcyBub3QgbG9ja2VkIGF0IGFsbC4KKyAqCisgKiBXQVJOSU5HOiBUaGlzIGludGVy ZmFjZSBpcyBwcmV0dHkgaG9ycmlibGUsIGJ1dCBUVE0gbmVlZHMgaXQgYmVjYXVzZSBpdAorICog ZG9lc24ndCBwYXNzIHRoZSBzdHJ1Y3Qgd3dfYWNxdWlyZV9jdHggYXJvdW5kIGluIHNvbWUgdmVy eSBsb25nIGNhbGxjaGFpbnMuCisgKiBFdmVyeW9uZSBlbHNlIGp1c3QgdXNlcyBpdCB0byBjaGVj ayB3aGV0aGVyIHRoZXkncmUgaG9sZGluZyBhIHJlc2VydmF0aW9uIG9yCisgKiBub3QuCiAgKi8K IHN0YXRpYyBpbmxpbmUgc3RydWN0IHd3X2FjcXVpcmVfY3R4ICpkbWFfcmVzdl9sb2NraW5nX2N0 eChzdHJ1Y3QgZG1hX3Jlc3YgKm9iaikKIHsKLS0gCjIuMzIuMAoKX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX18KSW50ZWwtZ2Z4IG1haWxpbmcgbGlzdApJbnRl bC1nZnhAbGlzdHMuZnJlZWRlc2t0b3Aub3JnCmh0dHBzOi8vbGlzdHMuZnJlZWRlc2t0b3Aub3Jn L21haWxtYW4vbGlzdGluZm8vaW50ZWwtZ2Z4Cg==