From: JeffleXu <jefflexu@linux.alibaba.com>
To: ming.lei@redhat.com, snitzer@redhat.com, axboe@kernel.dk
Cc: linux-block@vger.kernel.org, dm-devel@redhat.com
Subject: Re: [PATCH] block: introduce QUEUE_FLAG_POLL_CAP flag
Date: Fri, 16 Apr 2021 16:42:30 +0800	[thread overview]
Message-ID: <d6139204-a299-61b5-f215-f65dda731739@linux.alibaba.com> (raw)
In-Reply-To: <20210416080037.26335-1-jefflexu@linux.alibaba.com>

On 4/16/21 4:00 PM, Jeffle Xu wrote:
> Hi,
> How about this patch to remove the extra poll_capable() method?
> 
> The following 'dm: support IO polling for bio-based dm device' patch
> then needs the following change.
> 
> ```
> +	/*
> +	 * Checking request-based devices is left to
> +	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
> +	 * For bio-based devices, only set QUEUE_FLAG_POLL when all
> +	 * underlying devices support polling.
> +	 */
> +	if (__table_type_bio_based(t->type)) {
> +		if (dm_table_supports_poll(t)) {
> +			blk_queue_flag_set(QUEUE_FLAG_POLL_CAP, q);
> +			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
> +		} else {
> +			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
> +			blk_queue_flag_clear(QUEUE_FLAG_POLL_CAP, q);
> +		}
> +	}
> ```
> 
> 
> Introduce the QUEUE_FLAG_POLL_CAP flag, indicating whether the device
> supports IO polling. Both blk-mq and bio-based devices can then set this
> flag at initialization time, so that when IO polling is enabled via
> sysfs, only this flag needs to be checked instead of re-evaluating
> whether the device is capable of IO polling.
> 
> For NVMe, the ability to poll may change after RESET, since the
> nvme.poll_queues module parameter may change. Thus the polling
> capability needs to be rechecked after RESET.

The drawback of this approach is that every device driver that may change
the tag_set after initialization (e.g., NVMe RESET) needs to update
QUEUE_FLAG_POLL_CAP. Prior to this patch, the tag_set was checked directly
in queue_poll_store(), so device drivers didn't need to care about the
queue_flags.
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> ---
>  block/blk-mq.c           |  5 +++--
>  block/blk-sysfs.c        |  3 +--
>  drivers/nvme/host/core.c |  2 ++
>  include/linux/blk-mq.h   | 12 ++++++++++++
>  include/linux/blkdev.h   |  2 ++
>  5 files changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 414f5d99d9de..55ef6b975169 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -3227,9 +3227,10 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
>  	q->tag_set = set;
> 
>  	q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
> -	if (set->nr_maps > HCTX_TYPE_POLL &&
> -	    set->map[HCTX_TYPE_POLL].nr_queues)
> +	if (blk_mq_poll_capable(set)) {
> +		blk_queue_flag_set(QUEUE_FLAG_POLL_CAP, q);
>  		blk_queue_flag_set(QUEUE_FLAG_POLL, q);
> +	}
> 
>  	q->sg_reserved_size = INT_MAX;
> 
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index db3268d41274..64f0ab84b606 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -426,8 +426,7 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
>  	unsigned long poll_on;
>  	ssize_t ret;
> 
> -	if (!q->tag_set || q->tag_set->nr_maps <= HCTX_TYPE_POLL ||
> -	    !q->tag_set->map[HCTX_TYPE_POLL].nr_queues)
> +	if (!blk_queue_poll_cap(q))
>  		return -EINVAL;
> 
>  	ret = queue_var_store(&poll_on, page, count);
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index bb7da34dd967..5344cc877b05 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2210,6 +2210,8 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
>  	ns->lba_shift = id->lbaf[lbaf].ds;
>  	nvme_set_queue_limits(ns->ctrl, ns->queue);
> 
> +	blk_mq_check_poll(ns->disk->queue, ns->disk->queue->tag_set);
> +
>  	ret = nvme_configure_metadata(ns, id);
>  	if (ret)
>  		goto out_unfreeze;
> diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> index 2c473c9b8990..ee4c89c8bebc 100644
> --- a/include/linux/blk-mq.h
> +++ b/include/linux/blk-mq.h
> @@ -618,4 +618,16 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio);
>  void blk_mq_hctx_set_fq_lock_class(struct blk_mq_hw_ctx *hctx,
>  		struct lock_class_key *key);
> 
> +static inline bool blk_mq_poll_capable(struct blk_mq_tag_set *set)
> +{
> +	return set->nr_maps > HCTX_TYPE_POLL &&
> +	       set->map[HCTX_TYPE_POLL].nr_queues;
> +}
> +
> +static inline void blk_mq_check_poll(struct request_queue *q,
> +		struct blk_mq_tag_set *set)
> +{
> +	if (blk_mq_poll_capable(set))
> +		blk_queue_flag_set(QUEUE_FLAG_POLL_CAP, q);
> +}

Sorry, it should be

> +static inline void blk_mq_check_poll(struct request_queue *q,
> +		struct blk_mq_tag_set *set)
> +{
> +	if (blk_mq_poll_capable(set))
> +		blk_queue_flag_set(QUEUE_FLAG_POLL_CAP, q);
> +	else
> +		blk_queue_flag_clear(QUEUE_FLAG_POLL_CAP, q);
> +}

>  #endif
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 1e88116dc070..d192a106bf40 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -621,6 +621,7 @@ struct request_queue {
>  #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
>  #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
>  #define QUEUE_FLAG_NOWAIT	29	/* device supports NOWAIT */
> +#define QUEUE_FLAG_POLL_CAP	30	/* device supports IO polling */
> 
>  #define QUEUE_FLAG_MQ_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
> 				 (1 << QUEUE_FLAG_SAME_COMP) |		\
> @@ -668,6 +669,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
>  #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
>  #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
>  #define blk_queue_poll(q)	test_bit(QUEUE_FLAG_POLL, &(q)->queue_flags)
> +#define blk_queue_poll_cap(q)	test_bit(QUEUE_FLAG_POLL_CAP, &(q)->queue_flags)
> 
>  extern void blk_set_pm_only(struct request_queue *q);
>  extern void blk_clear_pm_only(struct request_queue *q);
> -- 

-- 
Thanks,
Jeffle