* [patch 0/3] block/mq: Convert to the new hotplug state machine
@ 2016-09-19 21:28 Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Thomas Gleixner @ 2016-09-19 21:28 UTC (permalink / raw)
To: LKML; +Cc: linux-block, Jens Axboe, Christoph Hellwig, Sebastian Siewior
The following series converts block/mq to the new hotplug state
machine. Patch 1/3 reserves the states for the block layer and is already applied to
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git smp/for-block
to avoid merge conflicts. This branch can be pulled into the block layer
instead of applying patch 1/3 manually.
Thanks,
tglx
---
include/linux/blk-mq.h | 2
block/blk-mq-cpu.c | 15 ++----
block/blk-mq.c | 108 ++++++++++++++++++++-------------------------
block/blk-mq.h | 2
include/linux/cpuhotplug.h | 2
5 files changed, 59 insertions(+), 70 deletions(-)
* [patch 1/3] blk/mq: Reserve hotplug states for block multiqueue
2016-09-19 21:28 [patch 0/3] block/mq: Convert to the new hotplug state machine Thomas Gleixner
@ 2016-09-19 21:28 ` Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
2 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2016-09-19 21:28 UTC (permalink / raw)
To: LKML
Cc: linux-block, Jens Axboe, Christoph Hellwig, Sebastian Siewior,
Peter Zijlstra, rt
This patch only reserves two CPU hotplug states for block/mq so the block tree
can apply the conversion patches.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: rt@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/cpuhotplug.h | 2 ++
1 file changed, 2 insertions(+)
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -14,6 +14,7 @@ enum cpuhp_state {
CPUHP_PERF_SUPERH,
CPUHP_X86_HPET_DEAD,
CPUHP_X86_APB_DEAD,
+ CPUHP_BLK_MQ_DEAD,
CPUHP_WORKQUEUE_PREP,
CPUHP_POWER_NUMA_PREPARE,
CPUHP_HRTIMERS_PREPARE,
@@ -22,6 +23,7 @@ enum cpuhp_state {
CPUHP_SMPCFD_PREPARE,
CPUHP_RCUTREE_PREP,
CPUHP_NOTIFY_PREPARE,
+ CPUHP_BLK_MQ_PREPARE,
CPUHP_TIMERS_DEAD,
CPUHP_BRINGUP_CPU,
CPUHP_AP_IDLE_DEAD,
* [patch 2/3] blk/mq/cpu-notif: Convert to hotplug state machine
2016-09-19 21:28 [patch 0/3] block/mq: Convert to the new hotplug state machine Thomas Gleixner
@ 2016-09-19 21:28 ` Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
2 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2016-09-19 21:28 UTC (permalink / raw)
To: LKML
Cc: linux-block, Jens Axboe, Christoph Hellwig, Sebastian Siewior,
Peter Zijlstra, rt
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Install the callbacks via the state machine so we can phase out the cpu
hotplug notifiers.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: rt@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
block/blk-mq-cpu.c | 15 +++++++--------
block/blk-mq.c | 21 +++++----------------
block/blk-mq.h | 2 +-
include/linux/blk-mq.h | 2 +-
4 files changed, 14 insertions(+), 26 deletions(-)
--- a/block/blk-mq-cpu.c
+++ b/block/blk-mq-cpu.c
@@ -18,18 +18,16 @@
static LIST_HEAD(blk_mq_cpu_notify_list);
static DEFINE_RAW_SPINLOCK(blk_mq_cpu_notify_lock);
-static int blk_mq_main_cpu_notify(struct notifier_block *self,
- unsigned long action, void *hcpu)
+static int blk_mq_cpu_dead(unsigned int cpu)
{
- unsigned int cpu = (unsigned long) hcpu;
struct blk_mq_cpu_notifier *notify;
- int ret = NOTIFY_OK;
+ int ret;
raw_spin_lock(&blk_mq_cpu_notify_lock);
list_for_each_entry(notify, &blk_mq_cpu_notify_list, list) {
- ret = notify->notify(notify->data, action, cpu);
- if (ret != NOTIFY_OK)
+ ret = notify->notify(notify->data, cpu);
+ if (ret)
break;
}
@@ -54,7 +52,7 @@ void blk_mq_unregister_cpu_notifier(stru
}
void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,
- int (*fn)(void *, unsigned long, unsigned int),
+ int (*fn)(void *, unsigned int),
void *data)
{
notifier->notify = fn;
@@ -63,5 +61,6 @@ void blk_mq_init_cpu_notifier(struct blk
void __init blk_mq_cpu_init(void)
{
- hotcpu_notifier(blk_mq_main_cpu_notify, 0);
+ cpuhp_setup_state_nocalls(CPUHP_BLK_MQ_DEAD, "block/mq:dead", NULL,
+ blk_mq_cpu_dead);
}
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1590,30 +1590,19 @@ static int blk_mq_hctx_cpu_offline(struc
spin_unlock(&ctx->lock);
if (list_empty(&tmp))
- return NOTIFY_OK;
+ return 0;
spin_lock(&hctx->lock);
list_splice_tail_init(&tmp, &hctx->dispatch);
spin_unlock(&hctx->lock);
blk_mq_run_hw_queue(hctx, true);
- return NOTIFY_OK;
+ return 0;
}
-static int blk_mq_hctx_notify(void *data, unsigned long action,
- unsigned int cpu)
+static int blk_mq_hctx_notify_dead(void *hctx, unsigned int cpu)
{
- struct blk_mq_hw_ctx *hctx = data;
-
- if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
- return blk_mq_hctx_cpu_offline(hctx, cpu);
-
- /*
- * In case of CPU online, tags may be reallocated
- * in blk_mq_map_swqueue() after mapping is updated.
- */
-
- return NOTIFY_OK;
+ return blk_mq_hctx_cpu_offline(hctx, cpu);
}
/* hctx->ctxs will be freed in queue's release handler */
@@ -1681,7 +1670,7 @@ static int blk_mq_init_hctx(struct reque
hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;
blk_mq_init_cpu_notifier(&hctx->cpu_notifier,
- blk_mq_hctx_notify, hctx);
+ blk_mq_hctx_notify_dead, hctx);
blk_mq_register_cpu_notifier(&hctx->cpu_notifier);
hctx->tags = set->tags[hctx_idx];
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -34,7 +34,7 @@ void blk_mq_wake_waiters(struct request_
*/
struct blk_mq_cpu_notifier;
void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,
- int (*fn)(void *, unsigned long, unsigned int),
+ int (*fn)(void *, unsigned int),
void *data);
void blk_mq_register_cpu_notifier(struct blk_mq_cpu_notifier *notifier);
void blk_mq_unregister_cpu_notifier(struct blk_mq_cpu_notifier *notifier);
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -10,7 +10,7 @@ struct blk_flush_queue;
struct blk_mq_cpu_notifier {
struct list_head list;
void *data;
- int (*notify)(void *data, unsigned long action, unsigned int cpu);
+ int (*notify)(void *data, unsigned int cpu);
};
struct blk_mq_hw_ctx {
* [patch 3/3] blk/mq: Convert to hotplug state machine
2016-09-19 21:28 [patch 0/3] block/mq: Convert to the new hotplug state machine Thomas Gleixner
@ 2016-09-19 21:28 ` Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
2016-09-19 21:28 ` Thomas Gleixner
2 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2016-09-19 21:28 UTC (permalink / raw)
To: LKML
Cc: linux-block, Jens Axboe, Christoph Hellwig, Sebastian Siewior,
Peter Zijlstra, rt
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Install the callbacks via the state machine so we can phase out the cpu
hotplug notifiers mess.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: rt@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
block/blk-mq.c | 87 ++++++++++++++++++++++++++++-----------------------------
1 file changed, 43 insertions(+), 44 deletions(-)
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2090,50 +2090,18 @@ static void blk_mq_queue_reinit(struct r
blk_mq_sysfs_register(q);
}
-static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
- unsigned long action, void *hcpu)
+/*
+ * New online cpumask which is going to be set in this hotplug event.
+ * Declare this cpumasks as global as cpu-hotplug operation is invoked
+ * one-by-one and dynamically allocating this could result in a failure.
+ */
+static struct cpumask cpuhp_online_new;
+
+static void blk_mq_queue_reinit_work(void)
{
struct request_queue *q;
- int cpu = (unsigned long)hcpu;
- /*
- * New online cpumask which is going to be set in this hotplug event.
- * Declare this cpumasks as global as cpu-hotplug operation is invoked
- * one-by-one and dynamically allocating this could result in a failure.
- */
- static struct cpumask online_new;
-
- /*
- * Before hotadded cpu starts handling requests, new mappings must
- * be established. Otherwise, these requests in hw queue might
- * never be dispatched.
- *
- * For example, there is a single hw queue (hctx) and two CPU queues
- * (ctx0 for CPU0, and ctx1 for CPU1).
- *
- * Now CPU1 is just onlined and a request is inserted into
- * ctx1->rq_list and set bit0 in pending bitmap as ctx1->index_hw is
- * still zero.
- *
- * And then while running hw queue, flush_busy_ctxs() finds bit0 is
- * set in pending bitmap and tries to retrieve requests in
- * hctx->ctxs[0]->rq_list. But htx->ctxs[0] is a pointer to ctx0,
- * so the request in ctx1->rq_list is ignored.
- */
- switch (action & ~CPU_TASKS_FROZEN) {
- case CPU_DEAD:
- case CPU_UP_CANCELED:
- cpumask_copy(&online_new, cpu_online_mask);
- break;
- case CPU_UP_PREPARE:
- cpumask_copy(&online_new, cpu_online_mask);
- cpumask_set_cpu(cpu, &online_new);
- break;
- default:
- return NOTIFY_OK;
- }
mutex_lock(&all_q_mutex);
-
/*
* We need to freeze and reinit all existing queues. Freezing
* involves synchronous wait for an RCU grace period and doing it
@@ -2154,13 +2122,43 @@ static int blk_mq_queue_reinit_notify(st
}
list_for_each_entry(q, &all_q_list, all_q_node)
- blk_mq_queue_reinit(q, &online_new);
+ blk_mq_queue_reinit(q, &cpuhp_online_new);
list_for_each_entry(q, &all_q_list, all_q_node)
blk_mq_unfreeze_queue(q);
mutex_unlock(&all_q_mutex);
- return NOTIFY_OK;
+}
+
+static int blk_mq_queue_reinit_dead(unsigned int cpu)
+{
+ cpumask_clear_cpu(cpu, &cpuhp_online_new);
+ blk_mq_queue_reinit_work();
+ return 0;
+}
+
+/*
+ * Before hotadded cpu starts handling requests, new mappings must be
+ * established. Otherwise, these requests in hw queue might never be
+ * dispatched.
+ *
+ * For example, there is a single hw queue (hctx) and two CPU queues (ctx0
+ * for CPU0, and ctx1 for CPU1).
+ *
+ * Now CPU1 is just onlined and a request is inserted into ctx1->rq_list
+ * and set bit0 in pending bitmap as ctx1->index_hw is still zero.
+ *
+ * And then while running hw queue, flush_busy_ctxs() finds bit0 is set in
+ * pending bitmap and tries to retrieve requests in hctx->ctxs[0]->rq_list.
+ * But htx->ctxs[0] is a pointer to ctx0, so the request in ctx1->rq_list
+ * is ignored.
+ */
+static int blk_mq_queue_reinit_prepare(unsigned int cpu)
+{
+ cpumask_copy(&cpuhp_online_new, cpu_online_mask);
+ cpumask_set_cpu(cpu, &cpuhp_online_new);
+ blk_mq_queue_reinit_work();
+ return 0;
}
static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
@@ -2381,8 +2379,9 @@ static int __init blk_mq_init(void)
{
blk_mq_cpu_init();
- hotcpu_notifier(blk_mq_queue_reinit_notify, 0);
-
+ cpuhp_setup_state_nocalls(CPUHP_BLK_MQ_PREPARE, "block/mq:prepare",
+ blk_mq_queue_reinit_prepare,
+ blk_mq_queue_reinit_dead);
return 0;
}
subsys_initcall(blk_mq_init);
* Re: [patch 2/3] blk/mq/cpu-notif: Convert to hotplug state machine
2016-09-19 21:28 ` Thomas Gleixner
@ 2016-09-19 22:24 ` Christoph Hellwig
2016-09-19 23:37 ` Thomas Gleixner
-1 siblings, 1 reply; 9+ messages in thread
From: Christoph Hellwig @ 2016-09-19 22:24 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, linux-block, Jens Axboe, Christoph Hellwig,
Sebastian Siewior, Peter Zijlstra, rt
On Mon, Sep 19, 2016 at 09:28:20PM -0000, Thomas Gleixner wrote:
> From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
>
> Install the callbacks via the state machine so we can phase out the cpu
> hotplug notifiers.
Didn't Sebastian come up with a version of the hotplug state callbacks
that can be per-object? This seems to be a perfect candidate for that.
* Re: [patch 2/3] blk/mq/cpu-notif: Convert to hotplug state machine
2016-09-19 22:24 ` Christoph Hellwig
@ 2016-09-19 23:37 ` Thomas Gleixner
0 siblings, 0 replies; 9+ messages in thread
From: Thomas Gleixner @ 2016-09-19 23:37 UTC (permalink / raw)
To: Christoph Hellwig
Cc: LKML, linux-block, Jens Axboe, Sebastian Siewior, Peter Zijlstra, rt
On Tue, 20 Sep 2016, Christoph Hellwig wrote:
> On Mon, Sep 19, 2016 at 09:28:20PM -0000, Thomas Gleixner wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> >
> > Install the callbacks via the state machine so we can phase out the cpu
> > hotplug notifiers.
>
> Didn't Sebastian come up with a version of the hotplug state callbacks
> that can be per-object? This seems to be a perfect candidate for that.
Indeed. I wrote that myself and forgot about it already..... :(
So yes, we can use that and get rid of blk-mq-cpu.c completely.
Thanks,
tglx