From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Sumit Semwal <sumit.semwal@linaro.org>,
	Gustavo Padovan <gustavo@padovan.org>,
	Christian König <christian.koenig@amd.com>
Cc: linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, Tomasz Figa <tfiga@chromium.org>,
	Christoph Hellwig <hch@infradead.org>,
	Sergey Senozhatsky <senozhatsky@chromium.org>,
	Ricardo Ribalda <ribalda@chromium.org>,
	linux-media@vger.kernel.org
Subject: [PATCH] dma-fence: allow dma fences to have their own lock
Date: Mon, 30 May 2022 23:22:32 +0900	[thread overview]
Message-ID: <20220530142232.2871634-1-senozhatsky@chromium.org> (raw)

	RFC

	I don't have a good name for this yet, and I have not spent
	any time on documentation (for that reason).

We create fences (out fences) as part of operation execution. Operations
are short-lived objects, and we want to release all of their memory once
execution completes or the operation is cancelled/deleted via ioctl().

This creates a bit of a problem. DMA fences are refcounted objects, and
the exporter never knows when an importer takes a fence reference or
drops it, so the exporter never knows when the fence will be destroyed.
That alone should not be a problem for a refcounted object, but here
comes the twist...

	operation A - creates and exports out fence X
	... user-space imports fence X
	operation A - finishes execution, signals fence X
	kfree operation A, put dma_fence

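To make the lifetime hazard concrete, here is a minimal sketch of the
pattern above (struct op, op_fence_ops, context and seqno are
hypothetical names, for illustration only):

	/* Hypothetical short-lived operation that lends its lock. */
	struct op {
		spinlock_t lock;		/* lent to the out fence */
		struct dma_fence *fence;	/* refcounted, may outlive op */
	};

	op->fence = kzalloc(sizeof(*op->fence), GFP_KERNEL);
	dma_fence_init(op->fence, &op_fence_ops, &op->lock, context, seqno);

	/* ... an importer takes a reference to op->fence ... */

	dma_fence_signal(op->fence);
	dma_fence_put(op->fence);	/* drop the exporter's reference */
	kfree(op);			/* frees ->lock; if an importer still
					 * holds the fence, any later use of
					 * fence->lock is a use-after-free */
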
DMA fences are designed to borrow the spinlock that they use to protect
struct dma_fence members:

	struct dma_fence {
	        spinlock_t *lock;

	        const struct dma_fence_ops *ops;
		.....
	};

	void dma_fence_init(struct dma_fence *fence,
			const struct dma_fence_ops *ops,
			spinlock_t *lock,
			u64 context,
			u64 seqno);

So the `lock` must have at least the same lifespan as the DMA fence
that borrows it, which is impossible to guarantee in our case. When we
kfree the operation A struct, we also kfree the ->lock that the
operation lends to the DMA fence, which outlives operation A (for how
long depends on what fence importers do and when they drop the imported
fence's refcount).

This patch adds a new member to struct dma_fence: __lock_inplace. This
is a lock that the DMA fence uses to protect its own data when it
cannot reliably borrow a lock from the outside object.

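With the inplace lock, an exporter that cannot lend a long-lived lock
would allocate the fence standalone and initialize it with the new
helper, roughly like this (a sketch; the error handling and the
surrounding names are hypothetical):

	struct dma_fence *fence;

	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
	if (!fence)
		return -ENOMEM;

	/* The fence protects itself with the embedded __lock_inplace,
	 * so the lock lives exactly as long as the fence does. */
	dma_fence_inplace_lock_init(fence, &op_fence_ops, context, seqno);

The short-lived operation can then be freed at any time without
invalidating fence->lock.
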
I also had a patch that puts the inplace and borrowed locks into an
anonymous union and adds one more dma_fence_flag_bits entry to
distinguish between fences with borrowed and inplace locks:

	struct dma_fence {
		union {
			spinlock_t *lock;
			spinlock_t __lock_inplace;
		};
		...
	};

Then, instead of locking/unlocking ->lock directly, we would use
dma_fence_lock_irqsave()/dma_fence_unlock_irqrestore() macros, which
would check the fence flags and use either the borrowed or the inplace
lock. But after seeing how often drivers access fence ->lock directly,
I decided to scrap that approach and simply add an extra spinlock
member.
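
For reference, the macros in that scrapped version would have looked
roughly like this (a sketch; DMA_FENCE_FLAG_LOCK_INPLACE_BIT is a
hypothetical flag name):

	#define dma_fence_lock_irqsave(fence, flags)				\
	do {									\
		if (test_bit(DMA_FENCE_FLAG_LOCK_INPLACE_BIT,			\
			     &(fence)->flags))					\
			spin_lock_irqsave(&(fence)->__lock_inplace, flags);	\
		else								\
			spin_lock_irqsave((fence)->lock, flags);		\
	} while (0)

	#define dma_fence_unlock_irqrestore(fence, flags)			\
	do {									\
		if (test_bit(DMA_FENCE_FLAG_LOCK_INPLACE_BIT,			\
			     &(fence)->flags))					\
			spin_unlock_irqrestore(&(fence)->__lock_inplace,	\
					       flags);				\
		else								\
			spin_unlock_irqrestore((fence)->lock, flags);		\
	} while (0)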

Not-Yet-Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 drivers/dma-buf/dma-fence.c | 10 ++++++++++
 include/linux/dma-fence.h   |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 066400ed8841..7ae40b8adb73 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -958,3 +958,13 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 	trace_dma_fence_init(fence);
 }
 EXPORT_SYMBOL(dma_fence_init);
+
+void dma_fence_inplace_lock_init(struct dma_fence *fence,
+				 const struct dma_fence_ops *ops,
+				 u64 context, u64 seqno)
+{
+	spin_lock_init(&fence->__lock_inplace);
+
+	dma_fence_init(fence, ops, &fence->__lock_inplace, context, seqno);
+}
+EXPORT_SYMBOL(dma_fence_inplace_lock_init);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 1ea691753bd3..6b15a0d2eccf 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -64,6 +64,8 @@ struct dma_fence_cb;
  */
 struct dma_fence {
 	spinlock_t *lock;
+	spinlock_t __lock_inplace;
+
 	const struct dma_fence_ops *ops;
 	/*
 	 * We clear the callback list on kref_put so that by the time we
@@ -262,6 +264,10 @@ struct dma_fence_ops {
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 		    spinlock_t *lock, u64 context, u64 seqno);
 
+void dma_fence_inplace_lock_init(struct dma_fence *fence,
+				 const struct dma_fence_ops *ops,
+				 u64 context, u64 seqno);
+
 void dma_fence_release(struct kref *kref);
 void dma_fence_free(struct dma_fence *fence);
 void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq);
-- 
2.36.1.124.g0e6072fb45-goog


Thread overview: 25+ messages
2022-05-30 14:22 Sergey Senozhatsky [this message]
2022-05-30 14:55 ` Christian König
2022-05-30 15:09   ` Sergey Senozhatsky
2022-05-30 15:45   ` Sergey Senozhatsky
2022-06-01 13:50     ` [Linaro-mm-sig] " Christian König
2022-05-31  2:51   ` Sergey Senozhatsky
2022-06-01 12:45     ` [Linaro-mm-sig] " Christian König
2022-06-01 13:22       ` Daniel Vetter
2022-06-01 13:52         ` Christian König
2022-06-01 14:27       ` Sergey Senozhatsky
2022-06-01 14:38         ` Christian König
2022-06-01 14:52           ` Sergey Senozhatsky
2022-06-01 15:06             ` Christian König
