* [PATCH v11 0/2] blk-mq: Rework blk-mq timeout handling again
@ 2018-05-18 18:00 Bart Van Assche
  2018-05-18 18:00 ` [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64 Bart Van Assche
  2018-05-18 18:00 ` [PATCH v11 2/2] blk-mq: Rework blk-mq timeout handling again Bart Van Assche
  0 siblings, 2 replies; 7+ messages in thread
From: Bart Van Assche @ 2018-05-18 18:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, linux-kernel, Christoph Hellwig, Bart Van Assche

Hello Jens,

This patch series reworks blk-mq timeout handling by introducing a state
machine per request. Please consider this patch series for inclusion in the
upstream kernel.

Bart.

Changes compared to v10:
- In patch 1/2, added "default y if 64BIT" to the "config ARCH_HAVE_CMPXCHG64"
  entry in arch/Kconfig. Left out the "select ARCH_HAVE_CMPXCHG64" statements
  that became superfluous due to this change (alpha, arm64, powerpc and s390).
- Also in patch 1/2, only select ARCH_HAVE_CMPXCHG64 if X86_CMPXCHG64 has been
  selected.
- In patch 2/2, moved blk_mq_change_rq_state() from blk-mq.h to blk-mq.c.
- Added a comment header above __blk_mq_requeue_request() and
  blk_mq_requeue_request().
- Documented the MQ_RQ_* state transitions in block/blk-mq.h.
- Left out the fourth argument of blk_mq_rq_set_deadline().

Changes compared to v9:
- Addressed multiple comments related to patch 1/2: added
  CONFIG_ARCH_HAVE_CMPXCHG64 for riscv, modified
  features/locking/cmpxchg64/arch-support.txt as requested and made the
  order of the symbols in the arch/*/Kconfig alphabetical where possible.

Changes compared to v8:
- Split into two patches.
- Moved the spin_lock_init() call from blk_mq_rq_ctx_init() into
  blk_mq_init_request().
- Fixed the deadline set by blk_add_timer().
- Surrounded the das_lock member with #ifndef CONFIG_ARCH_HAVE_CMPXCHG64 /
  #endif.

Changes compared to v7:
- Fixed the generation number mechanism. Note: with this patch applied the
  behavior of the block layer does not depend on the generation number.
- Added more 32-bit architectures to the list of architectures on which
  cmpxchg64() should not be used.

Changes compared to v6:
- Used a union instead of bit manipulations to store multiple values into
  a single 64-bit field.
- Reduced the size of the timeout field from 64 to 32 bits.
- Made sure that the block layer still builds with this patch applied
  for the sh and mips architectures.
- Fixed two sparse warnings that were introduced by this patch in the
  WRITE_ONCE() calls.

Changes compared to v5:
- Restored the synchronize_rcu() call between marking a request for timeout
  handling and the actual timeout handling, to avoid timeout handling starting
  while .queue_rq() is still in progress when the timeout is very short.
- Only use cmpxchg() if another context could attempt to change the request
  state concurrently. Use WRITE_ONCE() otherwise.

Changes compared to v4:
- Addressed multiple review comments from Christoph. The most important are
  that atomic_long_cmpxchg() has been changed into cmpxchg() and also that
  there is now a nice and clean split between the legacy and blk-mq versions
  of blk_add_timer().
- Changed the patch name and modified the patch description because there is
  disagreement about whether or not the v4.16 blk-mq core can complete a
  single request twice. Kept the "Cc: stable" tag because of
  https://bugzilla.kernel.org/show_bug.cgi?id=199077.

Changes compared to v3 (see also https://www.mail-archive.com/linux-block@vger.kernel.org/msg20073.html):
- Removed the spinlock again that was introduced to protect the request state.
  v4 uses atomic_long_cmpxchg() instead.
- Split __deadline into two variables - one for the legacy block layer and one
  for blk-mq.

Changes compared to v2 (https://www.mail-archive.com/linux-block@vger.kernel.org/msg18338.html):
- Rebased and retested on top of kernel v4.16.

Changes compared to v1 (https://www.mail-archive.com/linux-block@vger.kernel.org/msg18089.html):
- Removed the gstate and aborted_gstate members of struct request and used
  the __deadline member to encode both the generation and state information.

Bart Van Assche (2):
  arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64
  blk-mq: Rework blk-mq timeout handling again

 .../features/locking/cmpxchg64/arch-support.txt    |  33 +++
 arch/Kconfig                                       |   4 +
 arch/arm/Kconfig                                   |   1 +
 arch/ia64/Kconfig                                  |   1 +
 arch/m68k/Kconfig                                  |   1 +
 arch/mips/Kconfig                                  |   1 +
 arch/parisc/Kconfig                                |   1 +
 arch/riscv/Kconfig                                 |   1 +
 arch/sparc/Kconfig                                 |   1 +
 arch/x86/Kconfig                                   |   1 +
 arch/xtensa/Kconfig                                |   1 +
 block/blk-core.c                                   |   6 -
 block/blk-mq-debugfs.c                             |   1 -
 block/blk-mq.c                                     | 238 ++++++++++-----------
 block/blk-mq.h                                     |  64 +++---
 block/blk-timeout.c                                | 133 ++++++++----
 block/blk.h                                        |  11 +-
 include/linux/blkdev.h                             |  47 ++--
 18 files changed, 308 insertions(+), 238 deletions(-)
 create mode 100644 Documentation/features/locking/cmpxchg64/arch-support.txt

-- 
2.16.3


* [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64
  2018-05-18 18:00 [PATCH v11 0/2] blk-mq: Rework blk-mq timeout handling again Bart Van Assche
@ 2018-05-18 18:00 ` Bart Van Assche
  2018-05-18 18:32     ` hpa
  2018-05-18 18:00 ` [PATCH v11 2/2] blk-mq: Rework blk-mq timeout handling again Bart Van Assche
  1 sibling, 1 reply; 7+ messages in thread
From: Bart Van Assche @ 2018-05-18 18:00 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, Christoph Hellwig, Bart Van Assche,
	Catalin Marinas, Will Deacon, Tony Luck, Fenghua Yu,
	Geert Uytterhoeven, James E.J. Bottomley, Helge Deller,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Martin Schwidefsky, Heiko Carstens, David S . Miller,
	Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Chris Zankel,
	Max Filippov, Arnd Bergmann, Jonathan Corbet

The next patch in this series introduces a call to cmpxchg64()
in the block layer core for those architectures on which this
functionality is available. Make it possible to test whether
cmpxchg64() is available by introducing CONFIG_ARCH_HAVE_CMPXCHG64.
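
As a userspace illustration (not part of the patch, and not kernel code), the
primitive that CONFIG_ARCH_HAVE_CMPXCHG64 advertises behaves like an atomic
64-bit compare-and-swap; the GCC/Clang __atomic builtin stands in here for the
architecture-specific instruction, and the function name is made up:

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Sketch of what cmpxchg64() provides: atomically replace the 64-bit
 * word at *ptr with new_val only if it still holds old_val, and report
 * whether the swap happened.
 */
static bool cmpxchg64_sketch(uint64_t *ptr, uint64_t old_val,
			     uint64_t new_val)
{
	return __atomic_compare_exchange_n(ptr, &old_val, new_val, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}
```

A caller that loses the race (someone else changed the word first) sees a
%false return and can re-read the current value and retry or give up.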

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Jonathan Corbet <corbet@lwn.net>
---
 .../features/locking/cmpxchg64/arch-support.txt    | 33 ++++++++++++++++++++++
 arch/Kconfig                                       |  4 +++
 arch/arm/Kconfig                                   |  1 +
 arch/ia64/Kconfig                                  |  1 +
 arch/m68k/Kconfig                                  |  1 +
 arch/mips/Kconfig                                  |  1 +
 arch/parisc/Kconfig                                |  1 +
 arch/riscv/Kconfig                                 |  1 +
 arch/sparc/Kconfig                                 |  1 +
 arch/x86/Kconfig                                   |  1 +
 arch/xtensa/Kconfig                                |  1 +
 11 files changed, 46 insertions(+)
 create mode 100644 Documentation/features/locking/cmpxchg64/arch-support.txt

diff --git a/Documentation/features/locking/cmpxchg64/arch-support.txt b/Documentation/features/locking/cmpxchg64/arch-support.txt
new file mode 100644
index 000000000000..84bfef7242b2
--- /dev/null
+++ b/Documentation/features/locking/cmpxchg64/arch-support.txt
@@ -0,0 +1,33 @@
+#
+# Feature name:          cmpxchg64
+#         Kconfig:       ARCH_HAVE_CMPXCHG64
+#         description:   arch supports the cmpxchg64() API
+#
+    -----------------------
+    |         arch |status|
+    -----------------------
+    |       alpha: |  ok  |
+    |         arc: |  ..  |
+    |         arm: |  ok  |
+    |       arm64: |  ok  |
+    |         c6x: |  ..  |
+    |       h8300: |  ..  |
+    |     hexagon: |  ..  |
+    |        ia64: |  ok  |
+    |        m68k: |  ok  |
+    |  microblaze: |  ..  |
+    |        mips: |  ok  |
+    |       nds32: |  ..  |
+    |       nios2: |  ..  |
+    |    openrisc: |  ..  |
+    |      parisc: |  ok  |
+    |     powerpc: |  ok  |
+    |       riscv: |  ok  |
+    |        s390: |  ok  |
+    |          sh: |  ..  |
+    |       sparc: |  ok  |
+    |          um: |  ..  |
+    |   unicore32: |  ..  |
+    |         x86: |  ok  |
+    |      xtensa: |  ok  |
+    -----------------------
diff --git a/arch/Kconfig b/arch/Kconfig
index 8e0d665c8d53..9840b2577af1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -358,6 +358,10 @@ config HAVE_ALIGNED_STRUCT_PAGE
 	  on a struct page for better performance. However selecting this
 	  might increase the size of a struct page by a word.
 
+config ARCH_HAVE_CMPXCHG64
+	bool
+	default y if 64BIT
+
 config HAVE_CMPXCHG_LOCAL
 	bool
 
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index a7f8e7f4b88f..02c75697176e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -13,6 +13,7 @@ config ARM
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
 	select ARCH_HAS_STRICT_MODULE_RWX if MMU
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+	select ARCH_HAVE_CMPXCHG64 if !THUMB2_KERNEL
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index bbe12a038d21..31c49e1482e2 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -41,6 +41,7 @@ config IA64
 	select GENERIC_PENDING_IRQ if SMP
 	select GENERIC_IRQ_SHOW
 	select GENERIC_IRQ_LEGACY
+	select ARCH_HAVE_CMPXCHG64
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select GENERIC_IOMAP
 	select GENERIC_SMP_IDLE_THREAD
diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 785612b576f7..7b87cda3bbed 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -11,6 +11,7 @@ config M68K
 	select GENERIC_ATOMIC64
 	select HAVE_UID16
 	select VIRT_TO_BUS
+	select ARCH_HAVE_CMPXCHG64
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
 	select GENERIC_CPU_DEVICES
 	select GENERIC_IOMAP
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 225c95da23ce..088bca0fd9f2 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -7,6 +7,7 @@ config MIPS
 	select ARCH_DISCARD_MEMBLOCK
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+	select ARCH_HAVE_CMPXCHG64 if 64BIT
 	select ARCH_SUPPORTS_UPROBES
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF if 64BIT
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index fc5a574c3482..166c30865255 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -30,6 +30,7 @@ config PARISC
 	select GENERIC_ATOMIC64 if !64BIT
 	select GENERIC_IRQ_PROBE
 	select GENERIC_PCI_IOMAP
+	select ARCH_HAVE_CMPXCHG64
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_CPU_DEVICES
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index cd4fd85fde84..4f886a055ff6 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -8,6 +8,7 @@ config RISCV
 	select OF
 	select OF_EARLY_FLATTREE
 	select OF_IRQ
+	select ARCH_HAVE_CMPXCHG64
 	select ARCH_WANT_FRAME_POINTERS
 	select CLONE_BACKWARDS
 	select COMMON_CLK
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 8767e45f1b2b..e3429b78c491 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -75,6 +75,7 @@ config SPARC64
 	select HAVE_PERF_EVENTS
 	select PERF_USE_VMALLOC
 	select IRQ_PREFLOW_FASTEOI
+	select ARCH_HAVE_CMPXCHG64
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select HAVE_C_RECORDMCOUNT
 	select NO_BOOTMEM
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index c07f492b871a..52331f395bf4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -67,6 +67,7 @@ config X86
 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAS_ZONE_DEVICE		if X86_64
+	select ARCH_HAVE_CMPXCHG64		if X86_CMPXCHG64
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index c921e8bccdc8..0e5c77958fa3 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -4,6 +4,7 @@ config ZONE_DMA
 
 config XTENSA
 	def_bool y
+	select ARCH_HAVE_CMPXCHG64
 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_IPC_PARSE_VERSION
-- 
2.16.3


* [PATCH v11 2/2] blk-mq: Rework blk-mq timeout handling again
  2018-05-18 18:00 [PATCH v11 0/2] blk-mq: Rework blk-mq timeout handling again Bart Van Assche
  2018-05-18 18:00 ` [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64 Bart Van Assche
@ 2018-05-18 18:00 ` Bart Van Assche
  1 sibling, 0 replies; 7+ messages in thread
From: Bart Van Assche @ 2018-05-18 18:00 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-kernel, Christoph Hellwig, Bart Van Assche,
	Tejun Heo, Jianchao Wang, Ming Lei, Sebastian Ott, Sagi Grimberg,
	Israel Rukshin, Max Gurtovoy

Recently the blk-mq timeout handling code was reworked. See also Tejun
Heo, "[PATCHSET v4] blk-mq: reimplement timeout handling", 08 Jan 2018
(https://www.mail-archive.com/linux-block@vger.kernel.org/msg16985.html).
This patch reworks the blk-mq timeout handling code again. The timeout
handling code is simplified by introducing a state machine per request.
This change prevents the blk-mq timeout handling code from ignoring
completions that occur after blk_mq_check_expired() has been called and
before blk_mq_rq_timed_out() has been called.

Fix this race as follows:
- Replace the __deadline member of struct request by a new member
  called das that contains the generation number, state and deadline.
  Only 32 bits are used for the deadline field such that all three
  fields occupy only 64 bits. This change reduces the maximum supported
  request timeout value from (2**63/HZ) to (2**31/HZ).
- Remove all request member variables that became superfluous due to
  this change: gstate, gstate_seq and aborted_gstate_sync.
- Remove the request state information that became superfluous due to
  this patch, namely RQF_MQ_TIMEOUT_EXPIRED.
- Remove the code that became superfluous due to this change, namely
  the RCU lock and unlock statements in blk_mq_complete_request() and
  also the synchronize_rcu() call in the timeout handler.
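
The packed layout described above can be sketched in userspace as follows.
The exact field widths are an assumption on my part (the text above only fixes
the deadline at 32 bits; with 2 bits needed for the three MQ_RQ_* states, 30
bits remain for the generation), and the type name is illustrative, not copied
from the patch:

```c
#include <stdint.h>

/*
 * Illustrative sketch of a generation/state/deadline triple packed into
 * one 64-bit word so that all three fields can be updated with a single
 * cmpxchg64().  Field widths are assumptions, not taken from the patch.
 */
union blk_deadline_and_state_sketch {
	struct {
		uint32_t generation : 30; /* bumped when a request is reused */
		uint32_t state      : 2;  /* MQ_RQ_IDLE/IN_FLIGHT/COMPLETE */
		uint32_t deadline;        /* low 32 bits of the jiffies deadline */
	};
	uint64_t val; /* the whole triple as one atomic word */
};
```

Reading `.val` gives a coherent snapshot of all three fields at once, which is
what lets the seqcount and u64_stats synchronization be removed.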

Notes:
- A spinlock is used to protect atomic changes of rq->das on those
  architectures that do not provide a cmpxchg64() implementation.
- Atomic instructions are only used to update the request state if
  a concurrent request state change could be in progress.
- blk_add_timer() has been split into two functions - one for the
  legacy block layer and one for blk-mq.
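
The spinlock fallback mentioned in the first note can be sketched in userspace
like this (a tiny test-and-set spinlock stands in for the kernel's per-request
spinlock; all names here are illustrative, not from the patch):

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal stand-in for the per-request lock used on architectures
 * without cmpxchg64(). */
static char das_lock_sketch;

static void lock_sketch(void)
{
	while (__atomic_test_and_set(&das_lock_sketch, __ATOMIC_ACQUIRE))
		;
}

static void unlock_sketch(void)
{
	__atomic_clear(&das_lock_sketch, __ATOMIC_RELEASE);
}

/* Emulate a 64-bit compare-and-swap under the lock: same semantics as
 * cmpxchg64(), just serialized through the lock instead of a single
 * atomic instruction. */
static bool cmpxchg64_locked(uint64_t *val, uint64_t old_val,
			     uint64_t new_val)
{
	bool res = false;

	lock_sketch();
	if (*val == old_val) {
		*val = new_val;
		res = true;
	}
	unlock_sketch();
	return res;
}
```

Because only one of two racing callers can win the swap, a completion and a
timeout can never both claim the same request, with or without cmpxchg64().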

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Sebastian Ott <sebott@linux.ibm.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Israel Rukshin <israelr@mellanox.com>
Cc: Max Gurtovoy <maxg@mellanox.com>
---
 block/blk-core.c       |   6 --
 block/blk-mq-debugfs.c |   1 -
 block/blk-mq.c         | 238 ++++++++++++++++++++++---------------------------
 block/blk-mq.h         |  64 ++++++-------
 block/blk-timeout.c    | 133 ++++++++++++++++++---------
 block/blk.h            |  11 +--
 include/linux/blkdev.h |  47 +++++-----
 7 files changed, 262 insertions(+), 238 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 43370faee935..cee03cad99f2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -198,12 +198,6 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
 	rq->internal_tag = -1;
 	rq->start_time_ns = ktime_get_ns();
 	rq->part = NULL;
-	seqcount_init(&rq->gstate_seq);
-	u64_stats_init(&rq->aborted_gstate_sync);
-	/*
-	 * See comment of blk_mq_init_request
-	 */
-	WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);
 }
 EXPORT_SYMBOL(blk_rq_init);
 
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 3080e18cb859..ffa622366922 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -344,7 +344,6 @@ static const char *const rqf_name[] = {
 	RQF_NAME(STATS),
 	RQF_NAME(SPECIAL_PAYLOAD),
 	RQF_NAME(ZONE_WRITE_LOCKED),
-	RQF_NAME(MQ_TIMEOUT_EXPIRED),
 	RQF_NAME(MQ_POLL_SLEPT),
 };
 #undef RQF_NAME
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4cbfd784e837..e7dfa6ed7a44 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -318,7 +318,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->special = NULL;
 	/* tag was already set */
 	rq->extra_len = 0;
-	rq->__deadline = 0;
+	WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IDLE);
 
 	INIT_LIST_HEAD(&rq->timeout_list);
 	rq->timeout = 0;
@@ -465,6 +465,39 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 }
 EXPORT_SYMBOL_GPL(blk_mq_alloc_request_hctx);
 
+/**
+ * blk_mq_change_rq_state - atomically test and set request state
+ * @rq: Request pointer.
+ * @old_state: Old request state.
+ * @new_state: New request state.
+ *
+ * Returns %true if and only if the old state was @old_state and if the
+ * state has been changed into @new_state.
+ */
+static bool blk_mq_change_rq_state(struct request *rq,
+				   enum mq_rq_state old_state,
+				   enum mq_rq_state new_state)
+{
+	union blk_deadline_and_state das = READ_ONCE(rq->das);
+	union blk_deadline_and_state old_val = das;
+	union blk_deadline_and_state new_val = das;
+
+	WARN_ON_ONCE(new_state == MQ_RQ_IN_FLIGHT);
+
+	old_val.state = old_state;
+	new_val.state = new_state;
+	/*
+	 * For transitions from state in-flight to another state cmpxchg64()
+	 * must be used. For other state transitions it is safe to use
+	 * WRITE_ONCE().
+	 */
+	if (old_state != MQ_RQ_IN_FLIGHT) {
+		WRITE_ONCE(rq->das.val, new_val.val);
+		return true;
+	}
+	return blk_mq_set_rq_state(rq, old_val, new_val);
+}
+
 void blk_mq_free_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
@@ -494,7 +527,8 @@ void blk_mq_free_request(struct request *rq)
 	if (blk_rq_rl(rq))
 		blk_put_rl(blk_rq_rl(rq));
 
-	blk_mq_rq_update_state(rq, MQ_RQ_IDLE);
+	if (!blk_mq_change_rq_state(rq, blk_mq_rq_state(rq), MQ_RQ_IDLE))
+		WARN_ON_ONCE(true);
 	if (rq->tag != -1)
 		blk_mq_put_tag(hctx, hctx->tags, ctx, rq->tag);
 	if (sched_tag != -1)
@@ -547,8 +581,7 @@ static void __blk_mq_complete_request(struct request *rq)
 	bool shared = false;
 	int cpu;
 
-	WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IN_FLIGHT);
-	blk_mq_rq_update_state(rq, MQ_RQ_COMPLETE);
+	WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_COMPLETE);
 
 	if (rq->internal_tag != -1)
 		blk_mq_sched_completed_request(rq);
@@ -593,36 +626,6 @@ static void hctx_lock(struct blk_mq_hw_ctx *hctx, int *srcu_idx)
 		*srcu_idx = srcu_read_lock(hctx->srcu);
 }
 
-static void blk_mq_rq_update_aborted_gstate(struct request *rq, u64 gstate)
-{
-	unsigned long flags;
-
-	/*
-	 * blk_mq_rq_aborted_gstate() is used from the completion path and
-	 * can thus be called from irq context.  u64_stats_fetch in the
-	 * middle of update on the same CPU leads to lockup.  Disable irq
-	 * while updating.
-	 */
-	local_irq_save(flags);
-	u64_stats_update_begin(&rq->aborted_gstate_sync);
-	rq->aborted_gstate = gstate;
-	u64_stats_update_end(&rq->aborted_gstate_sync);
-	local_irq_restore(flags);
-}
-
-static u64 blk_mq_rq_aborted_gstate(struct request *rq)
-{
-	unsigned int start;
-	u64 aborted_gstate;
-
-	do {
-		start = u64_stats_fetch_begin(&rq->aborted_gstate_sync);
-		aborted_gstate = rq->aborted_gstate;
-	} while (u64_stats_fetch_retry(&rq->aborted_gstate_sync, start));
-
-	return aborted_gstate;
-}
-
 /**
  * blk_mq_complete_request - end I/O on a request
  * @rq:		the request being processed
@@ -634,27 +637,12 @@ static u64 blk_mq_rq_aborted_gstate(struct request *rq)
 void blk_mq_complete_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, rq->mq_ctx->cpu);
-	int srcu_idx;
 
 	if (unlikely(blk_should_fake_timeout(q)))
 		return;
 
-	/*
-	 * If @rq->aborted_gstate equals the current instance, timeout is
-	 * claiming @rq and we lost.  This is synchronized through
-	 * hctx_lock().  See blk_mq_timeout_work() for details.
-	 *
-	 * Completion path never blocks and we can directly use RCU here
-	 * instead of hctx_lock() which can be either RCU or SRCU.
-	 * However, that would complicate paths which want to synchronize
-	 * against us.  Let stay in sync with the issue path so that
-	 * hctx_lock() covers both issue and completion paths.
-	 */
-	hctx_lock(hctx, &srcu_idx);
-	if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
+	if (blk_mq_change_rq_state(rq, MQ_RQ_IN_FLIGHT, MQ_RQ_COMPLETE))
 		__blk_mq_complete_request(rq);
-	hctx_unlock(hctx, srcu_idx);
 }
 EXPORT_SYMBOL(blk_mq_complete_request);
 
@@ -681,27 +669,7 @@ void blk_mq_start_request(struct request *rq)
 		wbt_issue(q->rq_wb, rq);
 	}
 
-	WARN_ON_ONCE(blk_mq_rq_state(rq) != MQ_RQ_IDLE);
-
-	/*
-	 * Mark @rq in-flight which also advances the generation number,
-	 * and register for timeout.  Protect with a seqcount to allow the
-	 * timeout path to read both @rq->gstate and @rq->deadline
-	 * coherently.
-	 *
-	 * This is the only place where a request is marked in-flight.  If
-	 * the timeout path reads an in-flight @rq->gstate, the
-	 * @rq->deadline it reads together under @rq->gstate_seq is
-	 * guaranteed to be the matching one.
-	 */
-	preempt_disable();
-	write_seqcount_begin(&rq->gstate_seq);
-
-	blk_mq_rq_update_state(rq, MQ_RQ_IN_FLIGHT);
-	blk_add_timer(rq);
-
-	write_seqcount_end(&rq->gstate_seq);
-	preempt_enable();
+	blk_mq_add_timer(rq, MQ_RQ_IDLE);
 
 	if (q->dma_drain_size && blk_rq_bytes(rq)) {
 		/*
@@ -714,27 +682,46 @@ void blk_mq_start_request(struct request *rq)
 }
 EXPORT_SYMBOL(blk_mq_start_request);
 
-/*
- * When we reach here because queue is busy, it's safe to change the state
- * to IDLE without checking @rq->aborted_gstate because we should still be
- * holding the RCU read lock and thus protected against timeout.
+/**
+ * __blk_mq_requeue_request - requeue a request
+ * @rq: request to be requeued
+ *
+ * This function is either called by blk_mq_requeue_request() or by the block
+ * layer core if .queue_rq() returned BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE.
+ * If the request state is MQ_RQ_IN_FLIGHT and if this function is called from
+ * inside .queue_rq() then it is guaranteed that the timeout code won't try to
+ * modify the request state while this function is in progress because an RCU
+ * read lock is held around .queue_rq() and because the timeout code calls
+ * synchronize_rcu() after having marked requests and before processing
+ * time-outs.
  */
 static void __blk_mq_requeue_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
+	enum mq_rq_state old_state = blk_mq_rq_state(rq);
 
 	blk_mq_put_driver_tag(rq);
 
 	trace_block_rq_requeue(q, rq);
 	wbt_requeue(q->rq_wb, rq);
 
-	if (blk_mq_rq_state(rq) != MQ_RQ_IDLE) {
-		blk_mq_rq_update_state(rq, MQ_RQ_IDLE);
+	if (old_state != MQ_RQ_IDLE) {
+		if (!blk_mq_change_rq_state(rq, old_state, MQ_RQ_IDLE))
+			WARN_ON_ONCE(true);
 		if (q->dma_drain_size && blk_rq_bytes(rq))
 			rq->nr_phys_segments--;
 	}
 }
 
+/**
+ * blk_mq_requeue_request - requeue a request
+ * @rq: request to be requeued
+ * @kick_requeue_list: whether or not to kick the requeue_list
+ *
+ * This function is called after a request has completed, after a request has
+ * timed out or from inside .queue_rq(). In the latter case the request may
+ * already have been started.
+ */
 void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
 {
 	__blk_mq_requeue_request(rq);
@@ -838,8 +825,6 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 	const struct blk_mq_ops *ops = req->q->mq_ops;
 	enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
 
-	req->rq_flags |= RQF_MQ_TIMEOUT_EXPIRED;
-
 	if (ops->timeout)
 		ret = ops->timeout(req, reserved);
 
@@ -848,13 +833,7 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 		__blk_mq_complete_request(req);
 		break;
 	case BLK_EH_RESET_TIMER:
-		/*
-		 * As nothing prevents from completion happening while
-		 * ->aborted_gstate is set, this may lead to ignored
-		 * completions and further spurious timeouts.
-		 */
-		blk_mq_rq_update_aborted_gstate(req, 0);
-		blk_add_timer(req);
+		blk_mq_add_timer(req, MQ_RQ_COMPLETE);
 		break;
 	case BLK_EH_NOT_HANDLED:
 		break;
@@ -868,48 +847,50 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 		struct request *rq, void *priv, bool reserved)
 {
 	struct blk_mq_timeout_data *data = priv;
-	unsigned long gstate, deadline;
-	int start;
+	union blk_deadline_and_state das = READ_ONCE(rq->das);
+	unsigned long now = jiffies;
+	int32_t diff_jiffies = das.deadline - now;
+	int32_t diff_next = das.deadline - data->next;
+	enum mq_rq_state rq_state = das.state;
 
-	might_sleep();
-
-	if (rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED)
-		return;
-
-	/* read coherent snapshots of @rq->state_gen and @rq->deadline */
-	while (true) {
-		start = read_seqcount_begin(&rq->gstate_seq);
-		gstate = READ_ONCE(rq->gstate);
-		deadline = blk_rq_deadline(rq);
-		if (!read_seqcount_retry(&rq->gstate_seq, start))
-			break;
-		cond_resched();
-	}
-
-	/* if in-flight && overdue, mark for abortion */
-	if ((gstate & MQ_RQ_STATE_MASK) == MQ_RQ_IN_FLIGHT &&
-	    time_after_eq(jiffies, deadline)) {
-		blk_mq_rq_update_aborted_gstate(rq, gstate);
+	/*
+	 * Make sure that rq->aborted_gstate != rq->das if rq->deadline has not
+	 * yet been reached even if a request gets recycled before
+	 * blk_mq_terminate_expired() is called and the value of rq->deadline
+	 * is not modified due to the request recycling.
+	 */
+	rq->aborted_gstate = das;
+	rq->aborted_gstate.generation ^= (1UL << 29);
+	if (diff_jiffies <= 0 && rq_state == MQ_RQ_IN_FLIGHT) {
+		/* request timed out */
+		rq->aborted_gstate = das;
 		data->nr_expired++;
 		hctx->nr_expired++;
-	} else if (!data->next_set || time_after(data->next, deadline)) {
-		data->next = deadline;
+	} else if (!data->next_set) {
+		/* data->next is not yet set; set it to deadline. */
+		data->next = now + diff_jiffies;
 		data->next_set = 1;
+	} else if (diff_next < 0) {
+		/* data->next is later than deadline; reduce data->next. */
+		data->next += diff_next;
 	}
+
 }
 
 static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
 		struct request *rq, void *priv, bool reserved)
 {
+	union blk_deadline_and_state old_val = rq->aborted_gstate;
+	union blk_deadline_and_state new_val = old_val;
+
+	new_val.state = MQ_RQ_COMPLETE;
+
 	/*
-	 * We marked @rq->aborted_gstate and waited for RCU.  If there were
-	 * completions that we lost to, they would have finished and
-	 * updated @rq->gstate by now; otherwise, the completion path is
-	 * now guaranteed to see @rq->aborted_gstate and yield.  If
-	 * @rq->aborted_gstate still matches @rq->gstate, @rq is ours.
+	 * We marked @rq->aborted_gstate and waited for ongoing .queue_rq()
+	 * calls. If rq->das has not changed that means that it
+	 * is now safe to change the request state and to handle the timeout.
 	 */
-	if (!(rq->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) &&
-	    READ_ONCE(rq->gstate) == rq->aborted_gstate)
+	if (blk_mq_set_rq_state(rq, old_val, new_val))
 		blk_mq_rq_timed_out(rq, reserved);
 }
 
@@ -948,10 +929,12 @@ static void blk_mq_timeout_work(struct work_struct *work)
 		bool has_rcu = false;
 
 		/*
-		 * Wait till everyone sees ->aborted_gstate.  The
-		 * sequential waits for SRCUs aren't ideal.  If this ever
-		 * becomes a problem, we can add per-hw_ctx rcu_head and
-		 * wait in parallel.
+		 * For very short timeouts it can happen that
+		 * blk_mq_check_expired() modifies the state of a request
+		 * while .queue_rq() is still in progress. Hence wait until
+		 * these .queue_rq() calls have finished. This is also
+		 * necessary to avoid races with blk_mq_requeue_request() for
+		 * requests that have already been started.
 		 */
 		queue_for_each_hw_ctx(q, hctx, i) {
 			if (!hctx->nr_expired)
@@ -967,7 +950,7 @@ static void blk_mq_timeout_work(struct work_struct *work)
 		if (has_rcu)
 			synchronize_rcu();
 
-		/* terminate the ones we won */
+		/* Terminate the requests marked by blk_mq_check_expired(). */
 		blk_mq_queue_tag_busy_iter(q, blk_mq_terminate_expired, NULL);
 	}
 
@@ -2043,21 +2026,16 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 {
 	int ret;
 
+#ifndef CONFIG_ARCH_HAVE_CMPXCHG64
+	spin_lock_init(&rq->das_lock);
+#endif
+
 	if (set->ops->init_request) {
 		ret = set->ops->init_request(set, rq, hctx_idx, node);
 		if (ret)
 			return ret;
 	}
 
-	seqcount_init(&rq->gstate_seq);
-	u64_stats_init(&rq->aborted_gstate_sync);
-	/*
-	 * start gstate with gen 1 instead of 0, otherwise it will be equal
-	 * to aborted_gstate, and be identified timed out by
-	 * blk_mq_terminate_expired.
-	 */
-	WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);
-
 	return 0;
 }
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index e1bb420dc5d6..80dc06810ecf 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -2,6 +2,7 @@
 #ifndef INT_BLK_MQ_H
 #define INT_BLK_MQ_H
 
+#include <asm/cmpxchg.h>
 #include "blk-stat.h"
 #include "blk-mq-tag.h"
 
@@ -30,18 +31,22 @@ struct blk_mq_ctx {
 	struct kobject		kobj;
 } ____cacheline_aligned_in_smp;
 
-/*
- * Bits for request->gstate.  The lower two bits carry MQ_RQ_* state value
- * and the upper bits the generation number.
+/**
+ * enum mq_rq_state - blk-mq request state
+ *
+ * The legal state transitions are:
+ * - idle      -> in-flight: blk_mq_start_request()
+ * - in-flight -> complete:  blk_mq_complete_request() or request times out
+ * - complete  -> idle:      blk_mq_requeue_request() or blk_mq_free_request()
+ * - in-flight -> idle:      blk_mq_requeue_request() or blk_mq_free_request()
+ * - complete  -> in-flight: request restart due to BLK_EH_RESET_TIMER
+ *
+ * See also blk_deadline_and_state.state.
  */
 enum mq_rq_state {
 	MQ_RQ_IDLE		= 0,
 	MQ_RQ_IN_FLIGHT		= 1,
 	MQ_RQ_COMPLETE		= 2,
-
-	MQ_RQ_STATE_BITS	= 2,
-	MQ_RQ_STATE_MASK	= (1 << MQ_RQ_STATE_BITS) - 1,
-	MQ_RQ_GEN_INC		= 1 << MQ_RQ_STATE_BITS,
 };
 
 void blk_mq_freeze_queue(struct request_queue *q);
@@ -103,37 +108,34 @@ extern void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx);
 
 void blk_mq_release(struct request_queue *q);
 
-/**
- * blk_mq_rq_state() - read the current MQ_RQ_* state of a request
- * @rq: target request.
- */
-static inline int blk_mq_rq_state(struct request *rq)
+static inline bool blk_mq_set_rq_state(struct request *rq,
+				       union blk_deadline_and_state old_val,
+				       union blk_deadline_and_state new_val)
 {
-	return READ_ONCE(rq->gstate) & MQ_RQ_STATE_MASK;
+#ifdef CONFIG_ARCH_HAVE_CMPXCHG64
+	return cmpxchg64(&rq->das.val, old_val.val, new_val.val) == old_val.val;
+#else
+	unsigned long flags;
+	bool res = false;
+
+	spin_lock_irqsave(&rq->das_lock, flags);
+	if (rq->das.val == old_val.val) {
+		rq->das = new_val;
+		res = true;
+	}
+	spin_unlock_irqrestore(&rq->das_lock, flags);
+
+	return res;
+#endif
 }
 
 /**
- * blk_mq_rq_update_state() - set the current MQ_RQ_* state of a request
+ * blk_mq_rq_state() - read the current MQ_RQ_* state of a request
  * @rq: target request.
- * @state: new state to set.
- *
- * Set @rq's state to @state.  The caller is responsible for ensuring that
- * there are no other updaters.  A request can transition into IN_FLIGHT
- * only from IDLE and doing so increments the generation number.
  */
-static inline void blk_mq_rq_update_state(struct request *rq,
-					  enum mq_rq_state state)
+static inline enum mq_rq_state blk_mq_rq_state(struct request *rq)
 {
-	u64 old_val = READ_ONCE(rq->gstate);
-	u64 new_val = (old_val & ~MQ_RQ_STATE_MASK) | state;
-
-	if (state == MQ_RQ_IN_FLIGHT) {
-		WARN_ON_ONCE((old_val & MQ_RQ_STATE_MASK) != MQ_RQ_IDLE);
-		new_val += MQ_RQ_GEN_INC;
-	}
-
-	/* avoid exposing interim values */
-	WRITE_ONCE(rq->gstate, new_val);
+	return READ_ONCE(rq->das).state;
 }
 
 static inline struct blk_mq_ctx *__blk_mq_get_ctx(struct request_queue *q,
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index 652d4d4d3e97..4b016886f42d 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -145,6 +145,42 @@ void blk_timeout_work(struct work_struct *work)
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
+/*
+ * If the state of request @rq equals @old_state, atomically update the
+ * deadline to @new_time and the state to MQ_RQ_IN_FLIGHT. blk-mq only.
+ * cmpxchg64() is only used when a concurrent update is possible, i.e.
+ * when @old_state == MQ_RQ_IN_FLIGHT.
+ */
+static bool blk_mq_rq_set_deadline(struct request *rq, unsigned long new_time,
+				   enum mq_rq_state old_state)
+{
+	union blk_deadline_and_state old_val, new_val;
+	const enum mq_rq_state new_state = MQ_RQ_IN_FLIGHT;
+
+	if (old_state != MQ_RQ_IN_FLIGHT) {
+		old_val = READ_ONCE(rq->das);
+		if (old_val.state != old_state)
+			return false;
+		new_val = old_val;
+		new_val.deadline = new_time;
+		new_val.state = new_state;
+		new_val.generation++;
+		WRITE_ONCE(rq->das.val, new_val.val);
+		return true;
+	}
+
+	do {
+		old_val = READ_ONCE(rq->das);
+		if (old_val.state != old_state)
+			return false;
+		new_val = old_val;
+		new_val.deadline = new_time;
+		new_val.state = new_state;
+		new_val.generation++;
+	} while (!blk_mq_set_rq_state(rq, old_val, new_val));
+
+	return true;
+}
+
 /**
  * blk_abort_request -- Request request recovery for the specified command
  * @req:	pointer to the request of interest
@@ -158,12 +194,11 @@ void blk_abort_request(struct request *req)
 {
 	if (req->q->mq_ops) {
 		/*
-		 * All we need to ensure is that timeout scan takes place
-		 * immediately and that scan sees the new timeout value.
-		 * No need for fancy synchronizations.
+		 * Ensure that a timeout scan takes place immediately and that
+		 * that scan sees the new timeout value.
 		 */
-		blk_rq_set_deadline(req, jiffies);
-		kblockd_schedule_work(&req->q->timeout_work);
+		if (blk_mq_rq_set_deadline(req, jiffies, MQ_RQ_IN_FLIGHT))
+			kblockd_schedule_work(&req->q->timeout_work);
 	} else {
 		if (blk_mark_rq_complete(req))
 			return;
@@ -184,52 +219,17 @@ unsigned long blk_rq_timeout(unsigned long timeout)
 	return timeout;
 }
 
-/**
- * blk_add_timer - Start timeout timer for a single request
- * @req:	request that is about to start running.
- *
- * Notes:
- *    Each request has its own timer, and as it is added to the queue, we
- *    set up the timer. When the request completes, we cancel the timer.
- */
-void blk_add_timer(struct request *req)
+static void __blk_add_timer(struct request *req, unsigned long deadline)
 {
 	struct request_queue *q = req->q;
 	unsigned long expiry;
 
-	if (!q->mq_ops)
-		lockdep_assert_held(q->queue_lock);
-
-	/* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */
-	if (!q->mq_ops && !q->rq_timed_out_fn)
-		return;
-
-	BUG_ON(!list_empty(&req->timeout_list));
-
-	/*
-	 * Some LLDs, like scsi, peek at the timeout to prevent a
-	 * command from being retried forever.
-	 */
-	if (!req->timeout)
-		req->timeout = q->rq_timeout;
-
-	blk_rq_set_deadline(req, jiffies + req->timeout);
-	req->rq_flags &= ~RQF_MQ_TIMEOUT_EXPIRED;
-
-	/*
-	 * Only the non-mq case needs to add the request to a protected list.
-	 * For the mq case we simply scan the tag map.
-	 */
-	if (!q->mq_ops)
-		list_add_tail(&req->timeout_list, &req->q->timeout_list);
-
 	/*
 	 * If the timer isn't already pending or this timeout is earlier
 	 * than an existing one, modify the timer. Round up to next nearest
 	 * second.
 	 */
-	expiry = blk_rq_timeout(round_jiffies_up(blk_rq_deadline(req)));
-
+	expiry = blk_rq_timeout(round_jiffies_up(deadline));
 	if (!timer_pending(&q->timeout) ||
 	    time_before(expiry, q->timeout.expires)) {
 		unsigned long diff = q->timeout.expires - expiry;
@@ -244,5 +244,54 @@ void blk_add_timer(struct request *req)
 		if (!timer_pending(&q->timeout) || (diff >= HZ / 2))
 			mod_timer(&q->timeout, expiry);
 	}
+}
 
+/**
+ * blk_add_timer - Start timeout timer for a single request
+ * @req:	request that is about to start running.
+ *
+ * Notes:
+ *    Each request has its own timer, and as it is added to the queue, we
+ *    set up the timer. When the request completes, we cancel the timer.
+ */
+void blk_add_timer(struct request *req)
+{
+	struct request_queue *q = req->q;
+	unsigned long deadline;
+
+	lockdep_assert_held(q->queue_lock);
+
+	if (!q->rq_timed_out_fn)
+		return;
+	if (!req->timeout)
+		req->timeout = q->rq_timeout;
+
+	deadline = jiffies + req->timeout;
+	blk_rq_set_deadline(req, deadline);
+	list_add_tail(&req->timeout_list, &req->q->timeout_list);
+
+	return __blk_add_timer(req, deadline);
+}
+
+/**
+ * blk_mq_add_timer - set the deadline for a single request
+ * @req:	request for which to set the deadline.
+ * @old:	current request state.
+ *
+ * Sets the deadline of a request if and only if it has state @old and
+ * at the same time changes the request state from @old into MQ_RQ_IN_FLIGHT.
+ * The caller must guarantee that the request state won't be modified while
+ * this function is in progress.
+ */
+void blk_mq_add_timer(struct request *req, enum mq_rq_state old)
+{
+	struct request_queue *q = req->q;
+	unsigned long deadline;
+
+	if (!req->timeout)
+		req->timeout = q->rq_timeout;
+	deadline = jiffies + req->timeout;
+	if (!blk_mq_rq_set_deadline(req, deadline, old))
+		WARN_ON_ONCE(true);
+	return __blk_add_timer(req, deadline);
 }
diff --git a/block/blk.h b/block/blk.h
index eaf1a8e87d11..204a0345996c 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -170,6 +170,7 @@ static inline bool bio_integrity_endio(struct bio *bio)
 void blk_timeout_work(struct work_struct *work);
 unsigned long blk_rq_timeout(unsigned long timeout);
 void blk_add_timer(struct request *req);
+void blk_mq_add_timer(struct request *req, enum mq_rq_state old);
 void blk_delete_timer(struct request *);
 
 
@@ -195,17 +196,17 @@ void blk_account_io_done(struct request *req, u64 now);
  */
 static inline int blk_mark_rq_complete(struct request *rq)
 {
-	return test_and_set_bit(0, &rq->__deadline);
+	return test_and_set_bit(0, &rq->das.legacy_deadline);
 }
 
 static inline void blk_clear_rq_complete(struct request *rq)
 {
-	clear_bit(0, &rq->__deadline);
+	clear_bit(0, &rq->das.legacy_deadline);
 }
 
 static inline bool blk_rq_is_complete(struct request *rq)
 {
-	return test_bit(0, &rq->__deadline);
+	return test_bit(0, &rq->das.legacy_deadline);
 }
 
 /*
@@ -314,12 +315,12 @@ static inline void req_set_nomerge(struct request_queue *q, struct request *req)
  */
 static inline void blk_rq_set_deadline(struct request *rq, unsigned long time)
 {
-	rq->__deadline = time & ~0x1UL;
+	rq->das.legacy_deadline = time & ~0x1UL;
 }
 
 static inline unsigned long blk_rq_deadline(struct request *rq)
 {
-	return rq->__deadline & ~0x1UL;
+	return rq->das.legacy_deadline & ~0x1UL;
 }
 
 /*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f3999719f828..02078f53d636 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -27,8 +27,6 @@
 #include <linux/percpu-refcount.h>
 #include <linux/scatterlist.h>
 #include <linux/blkzoned.h>
-#include <linux/seqlock.h>
-#include <linux/u64_stats_sync.h>
 
 struct module;
 struct scsi_ioctl_command;
@@ -125,15 +123,23 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_SPECIAL_PAYLOAD	((__force req_flags_t)(1 << 18))
 /* The per-zone write lock is held for this request */
 #define RQF_ZONE_WRITE_LOCKED	((__force req_flags_t)(1 << 19))
-/* timeout is expired */
-#define RQF_MQ_TIMEOUT_EXPIRED	((__force req_flags_t)(1 << 20))
 /* already slept for hybrid poll */
-#define RQF_MQ_POLL_SLEPT	((__force req_flags_t)(1 << 21))
+#define RQF_MQ_POLL_SLEPT	((__force req_flags_t)(1 << 20))
 
 /* flags that prevent us from merging requests: */
 #define RQF_NOMERGE_FLAGS \
 	(RQF_STARTED | RQF_SOFTBARRIER | RQF_FLUSH_SEQ | RQF_SPECIAL_PAYLOAD)
 
+union blk_deadline_and_state {
+	struct {
+		uint32_t generation:30;
+		uint32_t state:2;
+		uint32_t deadline;
+	};
+	uint64_t val;
+	unsigned long legacy_deadline;
+};
+
 /*
  * Try to put the fields that are referenced together in the same cacheline.
  *
@@ -236,29 +242,24 @@ struct request {
 
 	unsigned int extra_len;	/* length of alignment and padding */
 
-	/*
-	 * On blk-mq, the lower bits of ->gstate (generation number and
-	 * state) carry the MQ_RQ_* state value and the upper bits the
-	 * generation number which is monotonically incremented and used to
-	 * distinguish the reuse instances.
-	 *
-	 * ->gstate_seq allows updates to ->gstate and other fields
-	 * (currently ->deadline) during request start to be read
-	 * atomically from the timeout path, so that it can operate on a
-	 * coherent set of information.
-	 */
-	seqcount_t gstate_seq;
-	u64 gstate;
-
 	/*
 	 * ->aborted_gstate is used by the timeout to claim a specific
 	 * recycle instance of this request.  See blk_mq_timeout_work().
 	 */
-	struct u64_stats_sync aborted_gstate_sync;
-	u64 aborted_gstate;
+	union blk_deadline_and_state aborted_gstate;
+
+#ifndef CONFIG_ARCH_HAVE_CMPXCHG64
+	spinlock_t das_lock;
+#endif
 
-	/* access through blk_rq_set_deadline, blk_rq_deadline */
-	unsigned long __deadline;
+	/*
+	 * Access through blk_rq_deadline() and blk_rq_set_deadline(),
+	 * blk_mark_rq_complete(), blk_clear_rq_complete() and
+	 * blk_rq_is_complete() for legacy queues or blk_mq_rq_state(),
+	 * blk_mq_change_rq_state() and blk_mq_rq_set_deadline() for
+	 * blk-mq queues.
+	 */
+	union blk_deadline_and_state das;
 
 	struct list_head timeout_list;
 
-- 
2.16.3
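The MQ_RQ_* transition rules documented in the comment block added to block/blk-mq.h can be summarized as a lookup table. The following userspace sketch is not part of the patch (all names beyond the MQ_RQ_* constants are hypothetical); it merely encodes the five legal transitions listed there:

```c
#include <assert.h>
#include <stdbool.h>

enum mq_rq_state { MQ_RQ_IDLE = 0, MQ_RQ_IN_FLIGHT = 1, MQ_RQ_COMPLETE = 2 };

/* legal[from][to], per the comment block added to block/blk-mq.h */
static const bool legal[3][3] = {
	[MQ_RQ_IDLE]      = { [MQ_RQ_IN_FLIGHT] = true },
	[MQ_RQ_IN_FLIGHT] = { [MQ_RQ_IDLE] = true, [MQ_RQ_COMPLETE] = true },
	[MQ_RQ_COMPLETE]  = { [MQ_RQ_IDLE] = true, [MQ_RQ_IN_FLIGHT] = true },
};

static bool transition_is_legal(enum mq_rq_state from, enum mq_rq_state to)
{
	return legal[from][to];
}
```

Note that idle -> complete and all self-transitions are absent from the table, which is what lets the timeout code distinguish a recycled request from a timed-out one.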

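The core of blk_mq_rq_set_deadline() above is a compare-and-swap loop over the packed 64-bit generation/state/deadline word. Below is a minimal userspace sketch of that loop, using C11 atomics as a stand-in for the kernel's cmpxchg64(); the union mirrors blk_deadline_and_state, but bitfield layout is implementation-defined, so this is illustrative only, not the kernel implementation:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors union blk_deadline_and_state from the patch. */
union das {
	struct {
		uint32_t generation:30;
		uint32_t state:2;
		uint32_t deadline;
	};
	uint64_t val;
};

enum rq_state { RQ_IDLE = 0, RQ_IN_FLIGHT = 1, RQ_COMPLETE = 2 };

/* Stand-in for cmpxchg64(): succeeds only if *p still holds old.val. */
static bool das_cmpxchg(_Atomic uint64_t *p, union das old, union das new)
{
	uint64_t expected = old.val;

	return atomic_compare_exchange_strong(p, &expected, new.val);
}

/*
 * Sketch of the blk_mq_rq_set_deadline() retry loop: re-read the packed
 * word, bail out if the state no longer matches, otherwise try to install
 * the new deadline/state with a bumped generation number.
 */
static bool set_deadline(_Atomic uint64_t *p, uint32_t new_time,
			 enum rq_state old_state)
{
	union das old, new;

	do {
		old.val = atomic_load(p);
		if (old.state != old_state)
			return false;
		new = old;
		new.deadline = new_time;
		new.state = RQ_IN_FLIGHT;
		new.generation++;
	} while (!das_cmpxchg(p, old, new));

	return true;
}
```

The generation bump is what prevents an ABA problem: a request that is completed and reissued between the load and the compare-and-swap no longer matches the old value, so a stale timeout cannot claim it.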
^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64
@ 2018-05-18 18:32     ` hpa
  0 siblings, 0 replies; 7+ messages in thread
From: hpa @ 2018-05-18 18:32 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, linux-kernel, Christoph Hellwig, Catalin Marinas,
	Will Deacon, Tony Luck, Fenghua Yu, Geert Uytterhoeven,
	James E.J. Bottomley, Helge Deller, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Martin Schwidefsky,
	Heiko Carstens, David S . Miller, Thomas Gleixner, Ingo Molnar,
	Chris Zankel, Max Filippov, Arnd Bergmann, Jonathan Corbet

On May 18, 2018 11:00:05 AM PDT, Bart Van Assche <bart.vanassche@wdc.com> wrote:
>The next patch in this series introduces a call to cmpxchg64()
>in the block layer core for those architectures on which this
>functionality is available. Make it possible to test whether
>cmpxchg64() is available by introducing CONFIG_ARCH_HAVE_CMPXCHG64.
>
>Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
>Cc: Catalin Marinas <catalin.marinas@arm.com>
>Cc: Will Deacon <will.deacon@arm.com>
>Cc: Tony Luck <tony.luck@intel.com>
>Cc: Fenghua Yu <fenghua.yu@intel.com>
>Cc: Geert Uytterhoeven <geert@linux-m68k.org>
>Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
>Cc: Helge Deller <deller@gmx.de>
>Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>Cc: Paul Mackerras <paulus@samba.org>
>Cc: Michael Ellerman <mpe@ellerman.id.au>
>Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
>Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
>Cc: David S. Miller <davem@davemloft.net>
>Cc: Thomas Gleixner <tglx@linutronix.de>
>Cc: Ingo Molnar <mingo@redhat.com>
>Cc: H. Peter Anvin <hpa@zytor.com>
>Cc: Chris Zankel <chris@zankel.net>
>Cc: Max Filippov <jcmvbkbc@gmail.com>
>Cc: Arnd Bergmann <arnd@arndb.de>
>Cc: Jonathan Corbet <corbet@lwn.net>
>---
>.../features/locking/cmpxchg64/arch-support.txt    | 33
>++++++++++++++++++++++
> arch/Kconfig                                       |  4 +++
> arch/arm/Kconfig                                   |  1 +
> arch/ia64/Kconfig                                  |  1 +
> arch/m68k/Kconfig                                  |  1 +
> arch/mips/Kconfig                                  |  1 +
> arch/parisc/Kconfig                                |  1 +
> arch/riscv/Kconfig                                 |  1 +
> arch/sparc/Kconfig                                 |  1 +
> arch/x86/Kconfig                                   |  1 +
> arch/xtensa/Kconfig                                |  1 +
> 11 files changed, 46 insertions(+)
>create mode 100644
>Documentation/features/locking/cmpxchg64/arch-support.txt
>
>diff --git a/Documentation/features/locking/cmpxchg64/arch-support.txt
>b/Documentation/features/locking/cmpxchg64/arch-support.txt
>new file mode 100644
>index 000000000000..84bfef7242b2
>--- /dev/null
>+++ b/Documentation/features/locking/cmpxchg64/arch-support.txt
>@@ -0,0 +1,33 @@
>+#
>+# Feature name:          cmpxchg64
>+#         Kconfig:       ARCH_HAVE_CMPXCHG64
>+#         description:   arch supports the cmpxchg64() API
>+#
>+    -----------------------
>+    |         arch |status|
>+    -----------------------
>+    |       alpha: |  ok  |
>+    |         arc: |  ..  |
>+    |         arm: |  ok  |
>+    |       arm64: |  ok  |
>+    |         c6x: |  ..  |
>+    |       h8300: |  ..  |
>+    |     hexagon: |  ..  |
>+    |        ia64: |  ok  |
>+    |        m68k: |  ok  |
>+    |  microblaze: |  ..  |
>+    |        mips: |  ok  |
>+    |       nds32: |  ..  |
>+    |       nios2: |  ..  |
>+    |    openrisc: |  ..  |
>+    |      parisc: |  ok  |
>+    |     powerpc: |  ok  |
>+    |       riscv: |  ok  |
>+    |        s390: |  ok  |
>+    |          sh: |  ..  |
>+    |       sparc: |  ok  |
>+    |          um: |  ..  |
>+    |   unicore32: |  ..  |
>+    |         x86: |  ok  |
>+    |      xtensa: |  ok  |
>+    -----------------------
>diff --git a/arch/Kconfig b/arch/Kconfig
>index 8e0d665c8d53..9840b2577af1 100644
>--- a/arch/Kconfig
>+++ b/arch/Kconfig
>@@ -358,6 +358,10 @@ config HAVE_ALIGNED_STRUCT_PAGE
> 	  on a struct page for better performance. However selecting this
> 	  might increase the size of a struct page by a word.
> 
>+config ARCH_HAVE_CMPXCHG64
>+	bool
>+	default y if 64BIT
>+
> config HAVE_CMPXCHG_LOCAL
> 	bool
> 
>diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
>index a7f8e7f4b88f..02c75697176e 100644
>--- a/arch/arm/Kconfig
>+++ b/arch/arm/Kconfig
>@@ -13,6 +13,7 @@ config ARM
> 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
> 	select ARCH_HAS_STRICT_MODULE_RWX if MMU
> 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>+	select ARCH_HAVE_CMPXCHG64 if !THUMB2_KERNEL
> 	select ARCH_HAVE_CUSTOM_GPIO_H
> 	select ARCH_HAS_GCOV_PROFILE_ALL
> 	select ARCH_MIGHT_HAVE_PC_PARPORT
>diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
>index bbe12a038d21..31c49e1482e2 100644
>--- a/arch/ia64/Kconfig
>+++ b/arch/ia64/Kconfig
>@@ -41,6 +41,7 @@ config IA64
> 	select GENERIC_PENDING_IRQ if SMP
> 	select GENERIC_IRQ_SHOW
> 	select GENERIC_IRQ_LEGACY
>+	select ARCH_HAVE_CMPXCHG64
> 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> 	select GENERIC_IOMAP
> 	select GENERIC_SMP_IDLE_THREAD
>diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
>index 785612b576f7..7b87cda3bbed 100644
>--- a/arch/m68k/Kconfig
>+++ b/arch/m68k/Kconfig
>@@ -11,6 +11,7 @@ config M68K
> 	select GENERIC_ATOMIC64
> 	select HAVE_UID16
> 	select VIRT_TO_BUS
>+	select ARCH_HAVE_CMPXCHG64
> 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
> 	select GENERIC_CPU_DEVICES
> 	select GENERIC_IOMAP
>diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
>index 225c95da23ce..088bca0fd9f2 100644
>--- a/arch/mips/Kconfig
>+++ b/arch/mips/Kconfig
>@@ -7,6 +7,7 @@ config MIPS
> 	select ARCH_DISCARD_MEMBLOCK
> 	select ARCH_HAS_ELF_RANDOMIZE
> 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
>+	select ARCH_HAVE_CMPXCHG64 if 64BIT
> 	select ARCH_SUPPORTS_UPROBES
> 	select ARCH_USE_BUILTIN_BSWAP
> 	select ARCH_USE_CMPXCHG_LOCKREF if 64BIT
>diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
>index fc5a574c3482..166c30865255 100644
>--- a/arch/parisc/Kconfig
>+++ b/arch/parisc/Kconfig
>@@ -30,6 +30,7 @@ config PARISC
> 	select GENERIC_ATOMIC64 if !64BIT
> 	select GENERIC_IRQ_PROBE
> 	select GENERIC_PCI_IOMAP
>+	select ARCH_HAVE_CMPXCHG64
> 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> 	select GENERIC_SMP_IDLE_THREAD
> 	select GENERIC_CPU_DEVICES
>diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
>index cd4fd85fde84..4f886a055ff6 100644
>--- a/arch/riscv/Kconfig
>+++ b/arch/riscv/Kconfig
>@@ -8,6 +8,7 @@ config RISCV
> 	select OF
> 	select OF_EARLY_FLATTREE
> 	select OF_IRQ
>+	select ARCH_HAVE_CMPXCHG64
> 	select ARCH_WANT_FRAME_POINTERS
> 	select CLONE_BACKWARDS
> 	select COMMON_CLK
>diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
>index 8767e45f1b2b..e3429b78c491 100644
>--- a/arch/sparc/Kconfig
>+++ b/arch/sparc/Kconfig
>@@ -75,6 +75,7 @@ config SPARC64
> 	select HAVE_PERF_EVENTS
> 	select PERF_USE_VMALLOC
> 	select IRQ_PREFLOW_FASTEOI
>+	select ARCH_HAVE_CMPXCHG64
> 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> 	select HAVE_C_RECORDMCOUNT
> 	select NO_BOOTMEM
>diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>index c07f492b871a..52331f395bf4 100644
>--- a/arch/x86/Kconfig
>+++ b/arch/x86/Kconfig
>@@ -67,6 +67,7 @@ config X86
> 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> 	select ARCH_HAS_UBSAN_SANITIZE_ALL
> 	select ARCH_HAS_ZONE_DEVICE		if X86_64
>+	select ARCH_HAVE_CMPXCHG64		if X86_CMPXCHG64
> 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
> 	select ARCH_MIGHT_HAVE_PC_PARPORT
>diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
>index c921e8bccdc8..0e5c77958fa3 100644
>--- a/arch/xtensa/Kconfig
>+++ b/arch/xtensa/Kconfig
>@@ -4,6 +4,7 @@ config ZONE_DMA
> 
> config XTENSA
> 	def_bool y
>+	select ARCH_HAVE_CMPXCHG64
> 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
> 	select ARCH_WANT_FRAME_POINTERS
> 	select ARCH_WANT_IPC_PARSE_VERSION

Perhaps it would be better to define cmpxchg64 as a macro (which can be #define cmpxchg64 cmpxchg64) rather putting this in Kconfig? Putting it in Kconfig makes sense if it affects config options.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
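The suggestion above, advertising cmpxchg64() through a self-referential macro rather than a Kconfig symbol, could look roughly like the sketch below. All names are hypothetical and the "primitive" is a single-threaded stand-in; the point is only the #ifdef-probing idiom:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical arch header for an architecture that implements
 * cmpxchg64(): defining the macro to its own name lets generic code
 * probe for the API with #ifdef instead of a Kconfig symbol.
 */
static inline uint64_t cmpxchg64(uint64_t *ptr, uint64_t old, uint64_t new)
{
	uint64_t cur = *ptr;	/* single-threaded stand-in, not atomic */

	if (cur == old)
		*ptr = new;
	return cur;
}
#define cmpxchg64 cmpxchg64

/* Generic code can now test for the API at preprocessing time. */
#ifdef cmpxchg64
#define HAVE_CMPXCHG64 1
#else
#define HAVE_CMPXCHG64 0
#endif
```

A caller that lacks the macro would compile its spinlock-based fallback instead, which is the same split the Kconfig symbol expresses, just decided in the preprocessor.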

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64
  2018-05-18 18:32     ` hpa
@ 2018-05-21 15:35       ` Bart Van Assche
  -1 siblings, 0 replies; 7+ messages in thread
From: Bart Van Assche @ 2018-05-21 15:35 UTC (permalink / raw)
  To: hpa, axboe
  Cc: schwidefsky, linux-kernel, linux-block, jcmvbkbc, corbet, tglx,
	deller, heiko.carstens, hch, fenghua.yu, catalin.marinas,
	will.deacon, mingo, jejb, tony.luck, chris, davem, arnd, geert,
	benh, mpe, paulus

On Fri, 2018-05-18 at 11:32 -0700, hpa@zytor.com wrote:
> On May 18, 2018 11:00:05 AM PDT, Bart Van Assche <bart.vanassche@wdc.com> wrote:
DQo+ID4gKw0KPiA+IGNvbmZpZyBIQVZFX0NNUFhDSEdfTE9DQUwNCj4gPiAJYm9vbA0KPiA+IA0K
PiA+IGRpZmYgLS1naXQgYS9hcmNoL2FybS9LY29uZmlnIGIvYXJjaC9hcm0vS2NvbmZpZw0KPiA+
IGluZGV4IGE3ZjhlN2Y0Yjg4Zi4uMDJjNzU2OTcxNzZlIDEwMDY0NA0KPiA+IC0tLSBhL2FyY2gv
YXJtL0tjb25maWcNCj4gPiArKysgYi9hcmNoL2FybS9LY29uZmlnDQo+ID4gQEAgLTEzLDYgKzEz
LDcgQEAgY29uZmlnIEFSTQ0KPiA+IAlzZWxlY3QgQVJDSF9IQVNfU1RSSUNUX0tFUk5FTF9SV1gg
aWYgTU1VICYmICFYSVBfS0VSTkVMDQo+ID4gCXNlbGVjdCBBUkNIX0hBU19TVFJJQ1RfTU9EVUxF
X1JXWCBpZiBNTVUNCj4gPiAJc2VsZWN0IEFSQ0hfSEFTX1RJQ0tfQlJPQURDQVNUIGlmIEdFTkVS
SUNfQ0xPQ0tFVkVOVFNfQlJPQURDQVNUDQo+ID4gKwlzZWxlY3QgQVJDSF9IQVZFX0NNUFhDSEc2
NCBpZiAhVEhVTUIyX0tFUk5FTA0KPiA+IAlzZWxlY3QgQVJDSF9IQVZFX0NVU1RPTV9HUElPX0gN
Cj4gPiAJc2VsZWN0IEFSQ0hfSEFTX0dDT1ZfUFJPRklMRV9BTEwNCj4gPiAJc2VsZWN0IEFSQ0hf
TUlHSFRfSEFWRV9QQ19QQVJQT1JUDQo+ID4gZGlmZiAtLWdpdCBhL2FyY2gvaWE2NC9LY29uZmln
IGIvYXJjaC9pYTY0L0tjb25maWcNCj4gPiBpbmRleCBiYmUxMmEwMzhkMjEuLjMxYzQ5ZTE0ODJl
MiAxMDA2NDQNCj4gPiAtLS0gYS9hcmNoL2lhNjQvS2NvbmZpZw0KPiA+ICsrKyBiL2FyY2gvaWE2
NC9LY29uZmlnDQo+ID4gQEAgLTQxLDYgKzQxLDcgQEAgY29uZmlnIElBNjQNCj4gPiAJc2VsZWN0
IEdFTkVSSUNfUEVORElOR19JUlEgaWYgU01QDQo+ID4gCXNlbGVjdCBHRU5FUklDX0lSUV9TSE9X
DQo+ID4gCXNlbGVjdCBHRU5FUklDX0lSUV9MRUdBQ1kNCj4gPiArCXNlbGVjdCBBUkNIX0hBVkVf
Q01QWENIRzY0DQo+ID4gCXNlbGVjdCBBUkNIX0hBVkVfTk1JX1NBRkVfQ01QWENIRw0KPiA+IAlz
ZWxlY3QgR0VORVJJQ19JT01BUA0KPiA+IAlzZWxlY3QgR0VORVJJQ19TTVBfSURMRV9USFJFQUQN
Cj4gPiBkaWZmIC0tZ2l0IGEvYXJjaC9tNjhrL0tjb25maWcgYi9hcmNoL202OGsvS2NvbmZpZw0K
PiA+IGluZGV4IDc4NTYxMmI1NzZmNy4uN2I4N2NkYTNiYmVkIDEwMDY0NA0KPiA+IC0tLSBhL2Fy
Y2gvbTY4ay9LY29uZmlnDQo+ID4gKysrIGIvYXJjaC9tNjhrL0tjb25maWcNCj4gPiBAQCAtMTEs
NiArMTEsNyBAQCBjb25maWcgTTY4Sw0KPiA+IAlzZWxlY3QgR0VORVJJQ19BVE9NSUM2NA0KPiA+
IAlzZWxlY3QgSEFWRV9VSUQxNg0KPiA+IAlzZWxlY3QgVklSVF9UT19CVVMNCj4gPiArCXNlbGVj
dCBBUkNIX0hBVkVfQ01QWENIRzY0DQo+ID4gCXNlbGVjdCBBUkNIX0hBVkVfTk1JX1NBRkVfQ01Q
WENIRyBpZiBSTVdfSU5TTlMNCj4gPiAJc2VsZWN0IEdFTkVSSUNfQ1BVX0RFVklDRVMNCj4gPiAJ
c2VsZWN0IEdFTkVSSUNfSU9NQVANCj4gPiBkaWZmIC0tZ2l0IGEvYXJjaC9taXBzL0tjb25maWcg
Yi9hcmNoL21pcHMvS2NvbmZpZw0KPiA+IGluZGV4IDIyNWM5NWRhMjNjZS4uMDg4YmNhMGZkOWYy
IDEwMDY0NA0KPiA+IC0tLSBhL2FyY2gvbWlwcy9LY29uZmlnDQo+ID4gKysrIGIvYXJjaC9taXBz
L0tjb25maWcNCj4gPiBAQCAtNyw2ICs3LDcgQEAgY29uZmlnIE1JUFMNCj4gPiAJc2VsZWN0IEFS
Q0hfRElTQ0FSRF9NRU1CTE9DSw0KPiA+IAlzZWxlY3QgQVJDSF9IQVNfRUxGX1JBTkRPTUlaRQ0K
PiA+IAlzZWxlY3QgQVJDSF9IQVNfVElDS19CUk9BRENBU1QgaWYgR0VORVJJQ19DTE9DS0VWRU5U
U19CUk9BRENBU1QNCj4gPiArCXNlbGVjdCBBUkNIX0hBVkVfQ01QWENIRzY0IGlmIDY0QklUDQo+
ID4gCXNlbGVjdCBBUkNIX1NVUFBPUlRTX1VQUk9CRVMNCj4gPiAJc2VsZWN0IEFSQ0hfVVNFX0JV
SUxUSU5fQlNXQVANCj4gPiAJc2VsZWN0IEFSQ0hfVVNFX0NNUFhDSEdfTE9DS1JFRiBpZiA2NEJJ
VA0KPiA+IGRpZmYgLS1naXQgYS9hcmNoL3BhcmlzYy9LY29uZmlnIGIvYXJjaC9wYXJpc2MvS2Nv
bmZpZw0KPiA+IGluZGV4IGZjNWE1NzRjMzQ4Mi4uMTY2YzMwODY1MjU1IDEwMDY0NA0KPiA+IC0t
LSBhL2FyY2gvcGFyaXNjL0tjb25maWcNCj4gPiArKysgYi9hcmNoL3BhcmlzYy9LY29uZmlnDQo+
ID4gQEAgLTMwLDYgKzMwLDcgQEAgY29uZmlnIFBBUklTQw0KPiA+IAlzZWxlY3QgR0VORVJJQ19B
VE9NSUM2NCBpZiAhNjRCSVQNCj4gPiAJc2VsZWN0IEdFTkVSSUNfSVJRX1BST0JFDQo+ID4gCXNl
bGVjdCBHRU5FUklDX1BDSV9JT01BUA0KPiA+ICsJc2VsZWN0IEFSQ0hfSEFWRV9DTVBYQ0hHNjQN
Cj4gPiAJc2VsZWN0IEFSQ0hfSEFWRV9OTUlfU0FGRV9DTVBYQ0hHDQo+ID4gCXNlbGVjdCBHRU5F
UklDX1NNUF9JRExFX1RIUkVBRA0KPiA+IAlzZWxlY3QgR0VORVJJQ19DUFVfREVWSUNFUw0KPiA+
IGRpZmYgLS1naXQgYS9hcmNoL3Jpc2N2L0tjb25maWcgYi9hcmNoL3Jpc2N2L0tjb25maWcNCj4g
PiBpbmRleCBjZDRmZDg1ZmRlODQuLjRmODg2YTA1NWZmNiAxMDA2NDQNCj4gPiAtLS0gYS9hcmNo
L3Jpc2N2L0tjb25maWcNCj4gPiArKysgYi9hcmNoL3Jpc2N2L0tjb25maWcNCj4gPiBAQCAtOCw2
ICs4LDcgQEAgY29uZmlnIFJJU0NWDQo+ID4gCXNlbGVjdCBPRg0KPiA+IAlzZWxlY3QgT0ZfRUFS
TFlfRkxBVFRSRUUNCj4gPiAJc2VsZWN0IE9GX0lSUQ0KPiA+ICsJc2VsZWN0IEFSQ0hfSEFWRV9D
TVBYQ0hHNjQNCj4gPiAJc2VsZWN0IEFSQ0hfV0FOVF9GUkFNRV9QT0lOVEVSUw0KPiA+IAlzZWxl
Y3QgQ0xPTkVfQkFDS1dBUkRTDQo+ID4gCXNlbGVjdCBDT01NT05fQ0xLDQo+ID4gZGlmZiAtLWdp
dCBhL2FyY2gvc3BhcmMvS2NvbmZpZyBiL2FyY2gvc3BhcmMvS2NvbmZpZw0KPiA+IGluZGV4IDg3
NjdlNDVmMWIyYi4uZTM0MjliNzhjNDkxIDEwMDY0NA0KPiA+IC0tLSBhL2FyY2gvc3BhcmMvS2Nv
bmZpZw0KPiA+ICsrKyBiL2FyY2gvc3BhcmMvS2NvbmZpZw0KPiA+IEBAIC03NSw2ICs3NSw3IEBA
IGNvbmZpZyBTUEFSQzY0DQo+ID4gCXNlbGVjdCBIQVZFX1BFUkZfRVZFTlRTDQo+ID4gCXNlbGVj
dCBQRVJGX1VTRV9WTUFMTE9DDQo+ID4gCXNlbGVjdCBJUlFfUFJFRkxPV19GQVNURU9JDQo+ID4g
KwlzZWxlY3QgQVJDSF9IQVZFX0NNUFhDSEc2NA0KPiA+IAlzZWxlY3QgQVJDSF9IQVZFX05NSV9T
QUZFX0NNUFhDSEcNCj4gPiAJc2VsZWN0IEhBVkVfQ19SRUNPUkRNQ09VTlQNCj4gPiAJc2VsZWN0
IE5PX0JPT1RNRU0NCj4gPiBkaWZmIC0tZ2l0IGEvYXJjaC94ODYvS2NvbmZpZyBiL2FyY2gveDg2
L0tjb25maWcNCj4gPiBpbmRleCBjMDdmNDkyYjg3MWEuLjUyMzMxZjM5NWJmNCAxMDA2NDQNCj4g
PiAtLS0gYS9hcmNoL3g4Ni9LY29uZmlnDQo+ID4gKysrIGIvYXJjaC94ODYvS2NvbmZpZw0KPiA+
IEBAIC02Nyw2ICs2Nyw3IEBAIGNvbmZpZyBYODYNCj4gPiAJc2VsZWN0IEFSQ0hfSEFTX1NZTkNf
Q09SRV9CRUZPUkVfVVNFUk1PREUNCj4gPiAJc2VsZWN0IEFSQ0hfSEFTX1VCU0FOX1NBTklUSVpF
X0FMTA0KPiA+IAlzZWxlY3QgQVJDSF9IQVNfWk9ORV9ERVZJQ0UJCWlmIFg4Nl82NA0KPiA+ICsJ
c2VsZWN0IEFSQ0hfSEFWRV9DTVBYQ0hHNjQJCWlmIFg4Nl9DTVBYQ0hHNjQNCj4gPiAJc2VsZWN0
IEFSQ0hfSEFWRV9OTUlfU0FGRV9DTVBYQ0hHDQo+ID4gCXNlbGVjdCBBUkNIX01JR0hUX0hBVkVf
QUNQSV9QREMJCWlmIEFDUEkNCj4gPiAJc2VsZWN0IEFSQ0hfTUlHSFRfSEFWRV9QQ19QQVJQT1JU
DQo+ID4gZGlmZiAtLWdpdCBhL2FyY2gveHRlbnNhL0tjb25maWcgYi9hcmNoL3h0ZW5zYS9LY29u
ZmlnDQo+ID4gaW5kZXggYzkyMWU4YmNjZGM4Li4wZTVjNzc5NThmYTMgMTAwNjQ0DQo+ID4gLS0t
IGEvYXJjaC94dGVuc2EvS2NvbmZpZw0KPiA+ICsrKyBiL2FyY2gveHRlbnNhL0tjb25maWcNCj4g
PiBAQCAtNCw2ICs0LDcgQEAgY29uZmlnIFpPTkVfRE1BDQo+ID4gDQo+ID4gY29uZmlnIFhURU5T
QQ0KPiA+IAlkZWZfYm9vbCB5DQo+ID4gKwlzZWxlY3QgQVJDSF9IQVZFX0NNUFhDSEc2NA0KPiA+
IAlzZWxlY3QgQVJDSF9OT19DT0hFUkVOVF9ETUFfTU1BUCBpZiAhTU1VDQo+ID4gCXNlbGVjdCBB
UkNIX1dBTlRfRlJBTUVfUE9JTlRFUlMNCj4gPiAJc2VsZWN0IEFSQ0hfV0FOVF9JUENfUEFSU0Vf
VkVSU0lPTg0KPiANCj4gUGVyaGFwcyBpdCB3b3VsZCBiZSBiZXR0ZXIgdG8gZGVmaW5lIGNtcHhj
aGc2NCBhcyBhIG1hY3JvICh3aGljaCBjYW4gYmUNCj4gI2RlZmluZSBjbXB4Y2hnNjQgY21weGNo
ZzY0KSByYXRoZXIgcHV0dGluZyB0aGlzIGluIEtjb25maWc/IFB1dHRpbmcgaXQNCj4gaW4gS2Nv
bmZpZyBtYWtlcyBzZW5zZSBpZiBpdCBhZmZlY3RzIGNvbmZpZyBvcHRpb25zLg0KDQpUaGF0J3Mg
YW4gaW50ZXJlc3Rpbmcgc3VnZ2VzdGlvbi4gVGhlIGZvbGxvd2luZyBzZWVtcyB0byBiZSBzdWZm
aWNpZW50IHRvDQphbGxvdyB0aGUgYmxvY2sgbGF5ZXIgdG8gY2hlY2sgd2l0aCBpZiBkZWZpbmVk
KGNtcHhjaGc2NCkgZm9yIGNtcHhjaGc2NCgpDQpzdXBwb3J0Og0KDQpkaWZmIC0tZ2l0IGEvYXJj
aC9hcm0vaW5jbHVkZS9hc20vY21weGNoZy5oIGIvYXJjaC9hcm0vaW5jbHVkZS9hc20vY21weGNo
Zy5oDQppbmRleCA4YjcwMWY4ZTE3NWMuLjQxYWI5OWJjMDRjYSAxMDA2NDQNCi0tLSBhL2FyY2gv
YXJtL2luY2x1ZGUvYXNtL2NtcHhjaGcuaA0KKysrIGIvYXJjaC9hcm0vaW5jbHVkZS9hc20vY21w
eGNoZy5oDQpAQCAtMTQxLDcgKzE0MSw5IEBAIHN0YXRpYyBpbmxpbmUgdW5zaWduZWQgbG9uZyBf
X3hjaGcodW5zaWduZWQgbG9uZyB4LCB2b2xhdGlsZSB2b2lkICpwdHIsIGludCBzaXplDQogCQkJ
CQkgICAgICAgIHNpemVvZigqKHB0cikpKTsJXA0KIH0pDQogDQorI2lmbmRlZiBDT05GSUdfVEhV
TUIyX0tFUk5FTA0KICNkZWZpbmUgY21weGNoZzY0X2xvY2FsKHB0ciwgbywgbikgX19jbXB4Y2hn
NjRfbG9jYWxfZ2VuZXJpYygocHRyKSwgKG8pLCAobikpDQorI2VuZGlmIC8qIENPTkZJR19USFVN
QjJfS0VSTkVMICovDQogDQogI2luY2x1ZGUgPGFzbS1nZW5lcmljL2NtcHhjaGcuaD4NCiANCkBA
IC0yNDEsNiArMjQzLDcgQEAgc3RhdGljIGlubGluZSB1bnNpZ25lZCBsb25nIF9fY21weGNoZ19s
b2NhbCh2b2xhdGlsZSB2b2lkICpwdHIsDQogCQkJCSAgICAgICAgc2l6ZW9mKCoocHRyKSkpOwkJ
XA0KIH0pDQogDQorI2lmbmRlZiBDT05GSUdfVEhVTUIyX0tFUk5FTA0KIHN0YXRpYyBpbmxpbmUg
dW5zaWduZWQgbG9uZyBsb25nIF9fY21weGNoZzY0KHVuc2lnbmVkIGxvbmcgbG9uZyAqcHRyLA0K
IAkJCQkJICAgICB1bnNpZ25lZCBsb25nIGxvbmcgb2xkLA0KIAkJCQkJICAgICB1bnNpZ25lZCBs
b25nIGxvbmcgbmV3KQ0KQEAgLTI3Myw2ICsyNzYsNyBAQCBzdGF0aWMgaW5saW5lIHVuc2lnbmVk
IGxvbmcgbG9uZyBfX2NtcHhjaGc2NCh1bnNpZ25lZCBsb25nIGxvbmcgKnB0ciwNCiB9KQ0KIA0K
ICNkZWZpbmUgY21weGNoZzY0X2xvY2FsKHB0ciwgbywgbikgY21weGNoZzY0X3JlbGF4ZWQoKHB0
ciksIChvKSwgKG4pKQ0KKyNlbmRpZiAvKiBDT05GSUdfVEhVTUIyX0tFUk5FTCAqLw0KIA0KICNl
bmRpZgkvKiBfX0xJTlVYX0FSTV9BUkNIX18gPj0gNiAqLw0KIA0K

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64
@ 2018-05-21 15:35       ` Bart Van Assche
  0 siblings, 0 replies; 7+ messages in thread
From: Bart Van Assche @ 2018-05-21 15:35 UTC (permalink / raw)
  To: hpa, axboe
  Cc: schwidefsky, linux-kernel, linux-block, jcmvbkbc, corbet, tglx,
	deller, heiko.carstens, hch, fenghua.yu, catalin.marinas,
	will.deacon, mingo, jejb, tony.luck, chris, davem, arnd, geert,
	benh, mpe, paulus

On Fri, 2018-05-18 at 11:32 -0700, hpa@zytor.com wrote:
> On May 18, 2018 11:00:05 AM PDT, Bart Van Assche <bart.vanassche@wdc.com> wrote:
> > The next patch in this series introduces a call to cmpxchg64()
> > in the block layer core for those architectures on which this
> > functionality is available. Make it possible to test whether
> > cmpxchg64() is available by introducing CONFIG_ARCH_HAVE_CMPXCHG64.
> > 
> > Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Will Deacon <will.deacon@arm.com>
> > Cc: Tony Luck <tony.luck@intel.com>
> > Cc: Fenghua Yu <fenghua.yu@intel.com>
> > Cc: Geert Uytterhoeven <geert@linux-m68k.org>
> > Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
> > Cc: Helge Deller <deller@gmx.de>
> > Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > Cc: Paul Mackerras <paulus@samba.org>
> > Cc: Michael Ellerman <mpe@ellerman.id.au>
> > Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> > Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
> > Cc: David S. Miller <davem@davemloft.net>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Ingo Molnar <mingo@redhat.com>
> > Cc: H. Peter Anvin <hpa@zytor.com>
> > Cc: Chris Zankel <chris@zankel.net>
> > Cc: Max Filippov <jcmvbkbc@gmail.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > Cc: Jonathan Corbet <corbet@lwn.net>
> > ---
> > .../features/locking/cmpxchg64/arch-support.txt    | 33
> > ++++++++++++++++++++++
> > arch/Kconfig                                       |  4 +++
> > arch/arm/Kconfig                                   |  1 +
> > arch/ia64/Kconfig                                  |  1 +
> > arch/m68k/Kconfig                                  |  1 +
> > arch/mips/Kconfig                                  |  1 +
> > arch/parisc/Kconfig                                |  1 +
> > arch/riscv/Kconfig                                 |  1 +
> > arch/sparc/Kconfig                                 |  1 +
> > arch/x86/Kconfig                                   |  1 +
> > arch/xtensa/Kconfig                                |  1 +
> > 11 files changed, 46 insertions(+)
> > create mode 100644
> > Documentation/features/locking/cmpxchg64/arch-support.txt
> > 
> > diff --git a/Documentation/features/locking/cmpxchg64/arch-support.txt
> > b/Documentation/features/locking/cmpxchg64/arch-support.txt
> > new file mode 100644
> > index 000000000000..84bfef7242b2
> > --- /dev/null
> > +++ b/Documentation/features/locking/cmpxchg64/arch-support.txt
> > @@ -0,0 +1,33 @@
> > +#
> > +# Feature name:          cmpxchg64
> > +#         Kconfig:       ARCH_HAVE_CMPXCHG64
> > +#         description:   arch supports the cmpxchg64() API
> > +#
> > +    -----------------------
> > +    |         arch |status|
> > +    -----------------------
> > +    |       alpha: |  ok  |
> > +    |         arc: |  ..  |
> > +    |         arm: |  ok  |
> > +    |       arm64: |  ok  |
> > +    |         c6x: |  ..  |
> > +    |       h8300: |  ..  |
> > +    |     hexagon: |  ..  |
> > +    |        ia64: |  ok  |
> > +    |        m68k: |  ok  |
> > +    |  microblaze: |  ..  |
> > +    |        mips: |  ok  |
> > +    |       nds32: |  ..  |
> > +    |       nios2: |  ..  |
> > +    |    openrisc: |  ..  |
> > +    |      parisc: |  ok  |
> > +    |     powerpc: |  ok  |
> > +    |       riscv: |  ok  |
> > +    |        s390: |  ok  |
> > +    |          sh: |  ..  |
> > +    |       sparc: |  ok  |
> > +    |          um: |  ..  |
> > +    |   unicore32: |  ..  |
> > +    |         x86: |  ok  |
> > +    |      xtensa: |  ok  |
> > +    -----------------------
> > diff --git a/arch/Kconfig b/arch/Kconfig
> > index 8e0d665c8d53..9840b2577af1 100644
> > --- a/arch/Kconfig
> > +++ b/arch/Kconfig
> > @@ -358,6 +358,10 @@ config HAVE_ALIGNED_STRUCT_PAGE
> > 	  on a struct page for better performance. However selecting this
> > 	  might increase the size of a struct page by a word.
> > 
> > +config ARCH_HAVE_CMPXCHG64
> > +	bool
> > +	default y if 64BIT
> > +
> > config HAVE_CMPXCHG_LOCAL
> > 	bool
> > 
> > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> > index a7f8e7f4b88f..02c75697176e 100644
> > --- a/arch/arm/Kconfig
> > +++ b/arch/arm/Kconfig
> > @@ -13,6 +13,7 @@ config ARM
> > 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU && !XIP_KERNEL
> > 	select ARCH_HAS_STRICT_MODULE_RWX if MMU
> > 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> > +	select ARCH_HAVE_CMPXCHG64 if !THUMB2_KERNEL
> > 	select ARCH_HAVE_CUSTOM_GPIO_H
> > 	select ARCH_HAS_GCOV_PROFILE_ALL
> > 	select ARCH_MIGHT_HAVE_PC_PARPORT
> > diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
> > index bbe12a038d21..31c49e1482e2 100644
> > --- a/arch/ia64/Kconfig
> > +++ b/arch/ia64/Kconfig
> > @@ -41,6 +41,7 @@ config IA64
> > 	select GENERIC_PENDING_IRQ if SMP
> > 	select GENERIC_IRQ_SHOW
> > 	select GENERIC_IRQ_LEGACY
> > +	select ARCH_HAVE_CMPXCHG64
> > 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> > 	select GENERIC_IOMAP
> > 	select GENERIC_SMP_IDLE_THREAD
> > diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
> > index 785612b576f7..7b87cda3bbed 100644
> > --- a/arch/m68k/Kconfig
> > +++ b/arch/m68k/Kconfig
> > @@ -11,6 +11,7 @@ config M68K
> > 	select GENERIC_ATOMIC64
> > 	select HAVE_UID16
> > 	select VIRT_TO_BUS
> > +	select ARCH_HAVE_CMPXCHG64
> > 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
> > 	select GENERIC_CPU_DEVICES
> > 	select GENERIC_IOMAP
> > diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
> > index 225c95da23ce..088bca0fd9f2 100644
> > --- a/arch/mips/Kconfig
> > +++ b/arch/mips/Kconfig
> > @@ -7,6 +7,7 @@ config MIPS
> > 	select ARCH_DISCARD_MEMBLOCK
> > 	select ARCH_HAS_ELF_RANDOMIZE
> > 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> > +	select ARCH_HAVE_CMPXCHG64 if 64BIT
> > 	select ARCH_SUPPORTS_UPROBES
> > 	select ARCH_USE_BUILTIN_BSWAP
> > 	select ARCH_USE_CMPXCHG_LOCKREF if 64BIT
> > diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
> > index fc5a574c3482..166c30865255 100644
> > --- a/arch/parisc/Kconfig
> > +++ b/arch/parisc/Kconfig
> > @@ -30,6 +30,7 @@ config PARISC
> > 	select GENERIC_ATOMIC64 if !64BIT
> > 	select GENERIC_IRQ_PROBE
> > 	select GENERIC_PCI_IOMAP
> > +	select ARCH_HAVE_CMPXCHG64
> > 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> > 	select GENERIC_SMP_IDLE_THREAD
> > 	select GENERIC_CPU_DEVICES
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index cd4fd85fde84..4f886a055ff6 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -8,6 +8,7 @@ config RISCV
> > 	select OF
> > 	select OF_EARLY_FLATTREE
> > 	select OF_IRQ
> > +	select ARCH_HAVE_CMPXCHG64
> > 	select ARCH_WANT_FRAME_POINTERS
> > 	select CLONE_BACKWARDS
> > 	select COMMON_CLK
> > diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
> > index 8767e45f1b2b..e3429b78c491 100644
> > --- a/arch/sparc/Kconfig
> > +++ b/arch/sparc/Kconfig
> > @@ -75,6 +75,7 @@ config SPARC64
> > 	select HAVE_PERF_EVENTS
> > 	select PERF_USE_VMALLOC
> > 	select IRQ_PREFLOW_FASTEOI
> > +	select ARCH_HAVE_CMPXCHG64
> > 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> > 	select HAVE_C_RECORDMCOUNT
> > 	select NO_BOOTMEM
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index c07f492b871a..52331f395bf4 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -67,6 +67,7 @@ config X86
> > 	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> > 	select ARCH_HAS_UBSAN_SANITIZE_ALL
> > 	select ARCH_HAS_ZONE_DEVICE		if X86_64
> > +	select ARCH_HAVE_CMPXCHG64		if X86_CMPXCHG64
> > 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
> > 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
> > 	select ARCH_MIGHT_HAVE_PC_PARPORT
> > diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
> > index c921e8bccdc8..0e5c77958fa3 100644
> > --- a/arch/xtensa/Kconfig
> > +++ b/arch/xtensa/Kconfig
> > @@ -4,6 +4,7 @@ config ZONE_DMA
> > 
> > config XTENSA
> > 	def_bool y
> > +	select ARCH_HAVE_CMPXCHG64
> > 	select ARCH_NO_COHERENT_DMA_MMAP if !MMU
> > 	select ARCH_WANT_FRAME_POINTERS
> > 	select ARCH_WANT_IPC_PARSE_VERSION
> 
> Perhaps it would be better to define cmpxchg64 as a macro (which can be
> #define cmpxchg64 cmpxchg64) rather than putting this in Kconfig? Putting it
> in Kconfig makes sense if it affects config options.

That's an interesting suggestion. The following seems to be sufficient to
allow the block layer to check for cmpxchg64() support with
#if defined(cmpxchg64):

diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 8b701f8e175c..41ab99bc04ca 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -141,7 +141,9 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 					        sizeof(*(ptr)));	\
 })
 
+#ifndef CONFIG_THUMB2_KERNEL
 #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#endif /* CONFIG_THUMB2_KERNEL */
 
 #include <asm-generic/cmpxchg.h>
 
@@ -241,6 +243,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 				        sizeof(*(ptr)));		\
 })
 
+#ifndef CONFIG_THUMB2_KERNEL
 static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
 					     unsigned long long old,
 					     unsigned long long new)
@@ -273,6 +276,7 @@ static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
 })
 
 #define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
+#endif /* CONFIG_THUMB2_KERNEL */
 
 #endif	/* __LINUX_ARM_ARCH__ >= 6 */
 

^ permalink raw reply related	[flat|nested] 7+ messages in thread


Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
2018-05-18 18:00 [PATCH v11 0/2] blk-mq: Rework blk-mq timeout handling again Bart Van Assche
2018-05-18 18:00 ` [PATCH v11 1/2] arch/*: Add CONFIG_ARCH_HAVE_CMPXCHG64 Bart Van Assche
2018-05-18 18:32   ` hpa
2018-05-21 15:35     ` Bart Van Assche
2018-05-18 18:00 ` [PATCH v11 2/2] blk-mq: Rework blk-mq timeout handling again Bart Van Assche
