From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christian König
To: sumit.semwal@linaro.org, daniel@ffwll.ch
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH 05/26] dma-buf: add dma_resv_replace_fences
Date: Tue, 23 Nov 2021 15:20:50 +0100
Message-Id: <20211123142111.3885-6-christian.koenig@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211123142111.3885-1-christian.koenig@amd.com>
References: <20211123142111.3885-1-christian.koenig@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-media@vger.kernel.org

This function allows replacing fences in the shared fence list when we can
guarantee that the operation represented by the original fence has finished,
or that it no longer accesses the resources protected by the dma_resv object
by the time the new fence finishes.

This function is then used in the amdkfd code when BOs are unmapped from the
process.

Signed-off-by: Christian König
---
 drivers/dma-buf/dma-resv.c                    | 43 ++++++++++++++++
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 49 +++----------------
 include/linux/dma-resv.h                      |  2 +
 3 files changed, 52 insertions(+), 42 deletions(-)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index e627a4274ff6..0daed67cab0e 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -312,6 +312,49 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 }
 EXPORT_SYMBOL(dma_resv_add_shared_fence);
 
+/**
+ * dma_resv_replace_fences - replace fences in the dma_resv obj
+ * @obj: the reservation object
+ * @context: the context of the fences to replace
+ * @replacement: the new fence to use instead
+ *
+ * Replace fences with a specified context with a new fence. Only valid if
+ * the operation represented by the original fences has completed, or no
+ * longer has access to the resources protected by the dma_resv object when
+ * the new fence completes.
+ */
+void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
+			     struct dma_fence *replacement)
+{
+	struct dma_resv_list *list;
+	struct dma_fence *old;
+	unsigned int i;
+
+	dma_resv_assert_held(obj);
+
+	write_seqcount_begin(&obj->seq);
+
+	old = dma_resv_excl_fence(obj);
+	if (old->context == context) {
+		RCU_INIT_POINTER(obj->fence_excl, dma_fence_get(replacement));
+		dma_fence_put(old);
+	}
+
+	list = dma_resv_shared_list(obj);
+	for (i = 0; list && i < list->shared_count; ++i) {
+		old = rcu_dereference_protected(list->shared[i],
+						dma_resv_held(obj));
+		if (old->context != context)
+			continue;
+
+		rcu_assign_pointer(list->shared[i], dma_fence_get(replacement));
+		dma_fence_put(old);
+	}
+
+	write_seqcount_end(&obj->seq);
+}
+EXPORT_SYMBOL(dma_resv_replace_fences);
+
 /**
  * dma_resv_add_excl_fence - Add an exclusive fence.
  * @obj: the reservation object
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 71acd577803e..b558ef0f8c4a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -236,53 +236,18 @@ void amdgpu_amdkfd_release_notify(struct amdgpu_bo *bo)
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 					struct amdgpu_amdkfd_fence *ef)
 {
-	struct dma_resv *resv = bo->tbo.base.resv;
-	struct dma_resv_list *old, *new;
-	unsigned int i, j, k;
+	struct dma_fence *replacement;
 
 	if (!ef)
 		return -EINVAL;
 
-	old = dma_resv_shared_list(resv);
-	if (!old)
-		return 0;
-
-	new = kmalloc(struct_size(new, shared, old->shared_max), GFP_KERNEL);
-	if (!new)
-		return -ENOMEM;
-
-	/* Go through all the shared fences in the resevation object and sort
-	 * the interesting ones to the end of the list.
+	/* TODO: Instead of blocking beforehand, we should use the fence of the
+	 * page table update and TLB flush here directly.
 	 */
-	for (i = 0, j = old->shared_count, k = 0; i < old->shared_count; ++i) {
-		struct dma_fence *f;
-
-		f = rcu_dereference_protected(old->shared[i],
-					      dma_resv_held(resv));
-
-		if (f->context == ef->base.context)
-			RCU_INIT_POINTER(new->shared[--j], f);
-		else
-			RCU_INIT_POINTER(new->shared[k++], f);
-	}
-	new->shared_max = old->shared_max;
-	new->shared_count = k;
-
-	/* Install the new fence list, seqcount provides the barriers */
-	write_seqcount_begin(&resv->seq);
-	RCU_INIT_POINTER(resv->fence, new);
-	write_seqcount_end(&resv->seq);
-
-	/* Drop the references to the removed fences or move them to ef_list */
-	for (i = j; i < old->shared_count; ++i) {
-		struct dma_fence *f;
-
-		f = rcu_dereference_protected(new->shared[i],
-					      dma_resv_held(resv));
-		dma_fence_put(f);
-	}
-	kfree_rcu(old, rcu);
-
+	replacement = dma_fence_get_stub();
+	dma_resv_replace_fences(bo->tbo.base.resv, ef->base.context,
+				replacement);
+	dma_fence_put(replacement);
 	return 0;
 }
 
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 2594fef75f51..0eb0c08c51c9 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -457,6 +457,8 @@ void dma_resv_init(struct dma_resv *obj);
 void dma_resv_fini(struct dma_resv *obj);
 int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences);
 void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
+void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
+			     struct dma_fence *fence);
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 void dma_resv_prune(struct dma_resv *obj);
 void dma_resv_prune_unlocked(struct dma_resv *obj);
-- 
2.25.1
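
For readers new to the interface, here is a minimal usage sketch (an editorial
illustration, not part of the patch): it mirrors the amdkfd hunk above by
swapping every fence from one fence context for an already signaled stub
fence. The helper name remove_fences_for_context() is hypothetical;
dma_resv_replace_fences(), dma_fence_get_stub(), dma_resv_assert_held() and
dma_fence_put() are the same calls the patch itself uses.

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>

/* Hypothetical helper: retire all fences emitted with @context from @resv.
 * The caller must hold the reservation lock, as dma_resv_replace_fences()
 * asserts.
 */
static void remove_fences_for_context(struct dma_resv *resv, u64 context)
{
	/* The stub fence is already signaled, so replaced slots read as done. */
	struct dma_fence *replacement = dma_fence_get_stub();

	dma_resv_assert_held(resv);

	/* Swap the exclusive and shared fences whose context matches. */
	dma_resv_replace_fences(resv, context, replacement);

	/* dma_resv_replace_fences() took its own references to the stub. */
	dma_fence_put(replacement);
}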