From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: <elver@google.com>, <catalin.marinas@arm.com>, <will@kernel.org>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>, <mark.rutland@arm.com>,
	Jonathan Corbet <corbet@lwn.net>
Cc: <linux-doc@vger.kernel.org>, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v2 1/2] Documentation/barriers: Add memory barrier dma_mb()
Date: Fri, 20 May 2022 11:15:47 +0800	[thread overview]
Message-ID: <20220520031548.175582-2-wangkefeng.wang@huawei.com> (raw)
In-Reply-To: <20220520031548.175582-1-wangkefeng.wang@huawei.com>

The memory barrier dma_mb() was introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer").
It is used to ensure that prior accesses (both reads and writes) to
memory by a CPU are ordered with respect to a subsequent MMIO write.
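
For illustration, here is a minimal sketch of the intended usage pattern
(the driver structure, register offset and variable names below are
hypothetical, not taken from that commit):

	/* Fill a command slot in coherent (consistent) DMA memory. */
	queue->cmds[prod] = *cmd;
	prod = (prod + 1) & (queue->nents - 1);

	/*
	 * Order the prior reads and writes of queue memory against the
	 * relaxed MMIO write below that publishes the new producer index.
	 */
	dma_mb();
	writel_relaxed(prod, queue->base + Q_PROD);

With a relaxed MMIO accessor such as writel_relaxed(), dma_mb() is what
provides the ordering between the earlier accesses to consistent memory
and the doorbell write that tells the device to go and look at them.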

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 Documentation/memory-barriers.txt | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..1eabcc0e4eca 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,10 +1894,13 @@ There are some more advanced barrier functions:
 
  (*) dma_wmb();
  (*) dma_rmb();
+ (*) dma_mb();
 
      These are for use with consistent memory to guarantee the ordering
      of writes or reads of shared memory accessible to both the CPU and a
-     DMA capable device.
+     DMA capable device. In addition, dma_mb() ensures that prior accesses
+     (both reads and writes) to memory by a CPU are ordered with respect to
+     a subsequent MMIO write.
 
      For example, consider a device driver that shares memory with a device
      and uses a descriptor status value to indicate if the descriptor belongs
-- 
2.35.3
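
The hunk above extends the existing dma_wmb()/dma_rmb() text. As a rough
sketch of how the three barriers relate (the descriptor layout, flag and
variable names are illustrative only, in the spirit of the example that
follows in memory-barriers.txt):

	if (desc->status != DEVICE_OWN) {
		/* Do not read descriptor data until we own the descriptor. */
		dma_rmb();

		/* Read and then update the shared data. */
		read_data = desc->data;
		desc->data = write_data;

		/* Flush the data update before the ownership update. */
		dma_wmb();

		/* Hand the descriptor back to the device. */
		desc->status = DEVICE_OWN;
	}

dma_mb() implies both of the above, ordering prior reads as well as prior
writes; per this patch it is documented for the case where the next access
is an MMIO write, for example a doorbell register written with
writel_relaxed().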



Thread overview: 12+ messages
2022-05-20  3:15 [PATCH v2 0/2] arm64: Fix kcsan test_barrier fail and panic Kefeng Wang
2022-05-20  3:15 ` Kefeng Wang
2022-05-20  3:15 ` Kefeng Wang [this message]
2022-05-20  3:15   ` [PATCH v2 1/2] Documentation/barriers: Add memory barrier dma_mb() Kefeng Wang
2022-05-20 10:08   ` Marco Elver
2022-05-20 10:08     ` Marco Elver
2022-05-20 11:14     ` Kefeng Wang
2022-05-20 11:14       ` Kefeng Wang
2022-05-20  3:15 ` [PATCH v2 2/2] arm64: kcsan: Support detecting more missing memory barriers Kefeng Wang
2022-05-20  3:15   ` Kefeng Wang
2022-05-20 10:14   ` Marco Elver
2022-05-20 10:14     ` Marco Elver

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20220520031548.175582-2-wangkefeng.wang@huawei.com \
    --to=wangkefeng.wang@huawei.com \
    --cc=catalin.marinas@arm.com \
    --cc=corbet@lwn.net \
    --cc=elver@google.com \
    --cc=linux-arm-kernel@lists.infradead.org \
    --cc=linux-doc@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mark.rutland@arm.com \
    --cc=will@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.