From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@fb.com>, Keith Busch <keith.busch@intel.com>,
	Sagi Grimberg <sagi@grimberg.me>
Cc: Max Gurtovoy <maxg@mellanox.com>, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
Subject: [PATCH 07/13] nvme-pci: don't poll from irq context when deleting queues
Date: Sun, 2 Dec 2018 17:46:22 +0100
Message-ID: <20181202164628.1116-8-hch@lst.de>
In-Reply-To: <20181202164628.1116-1-hch@lst.de>

This is the last place outside of nvme_irq that handles CQEs from
interrupt context, and thus is in the way of removing the cq_lock for
normal queues, and avoiding lockdep warnings on the poll queues, for
which we already take it without IRQ disabling.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/pci.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 9ceba9900ca3..2d5a468c35b1 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -122,7 +122,6 @@ struct nvme_dev {
 	u32 cmbsz;
 	u32 cmbloc;
 	struct nvme_ctrl ctrl;
-	struct completion ioq_wait;
 
 	mempool_t *iod_mempool;
 
@@ -203,10 +202,12 @@ struct nvme_queue {
 	unsigned long flags;
 #define NVMEQ_ENABLED		0
 #define NVMEQ_SQ_CMB		1
+#define NVMEQ_DELETE_ERROR	2
 	u32 *dbbuf_sq_db;
 	u32 *dbbuf_cq_db;
 	u32 *dbbuf_sq_ei;
 	u32 *dbbuf_cq_ei;
+	struct completion delete_done;
 };
 
 /*
@@ -1535,6 +1536,8 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
 	int result;
 	s16 vector;
 
+	clear_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags);
+
 	/*
 	 * A queue's vector matches the queue identifier unless the controller
 	 * has only one vector available.
@@ -2208,15 +2211,15 @@ static void nvme_del_queue_end(struct request *req, blk_status_t error)
 	struct nvme_queue *nvmeq = req->end_io_data;
 
 	blk_mq_free_request(req);
-	complete(&nvmeq->dev->ioq_wait);
+	complete(&nvmeq->delete_done);
 }
 
 static void nvme_del_cq_end(struct request *req, blk_status_t error)
 {
 	struct nvme_queue *nvmeq = req->end_io_data;
 
-	if (!error)
-		nvme_poll_irqdisable(nvmeq, -1);
+	if (error)
+		set_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags);
 
 	nvme_del_queue_end(req, error);
 }
 
@@ -2238,6 +2241,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	req->timeout = ADMIN_TIMEOUT;
 	req->end_io_data = nvmeq;
 
+	init_completion(&nvmeq->delete_done);
 	blk_execute_rq_nowait(q, NULL, req, false,
 			opcode == nvme_admin_delete_cq ?
 				nvme_del_cq_end : nvme_del_queue_end);
@@ -2249,7 +2253,6 @@ static bool nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
 	int nr_queues = dev->online_queues - 1, sent = 0;
 	unsigned long timeout;
 
-	reinit_completion(&dev->ioq_wait);
 retry:
 	timeout = ADMIN_TIMEOUT;
 	while (nr_queues > 0) {
@@ -2258,11 +2261,20 @@ static bool nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
 		nr_queues--;
 		sent++;
 	}
-	while (sent--) {
-		timeout = wait_for_completion_io_timeout(&dev->ioq_wait,
+	while (sent) {
+		struct nvme_queue *nvmeq = &dev->queues[nr_queues + sent];
+
+		timeout = wait_for_completion_io_timeout(&nvmeq->delete_done,
 				timeout);
 		if (timeout == 0)
 			return false;
+
+		/* handle any remaining CQEs */
+		if (opcode == nvme_admin_delete_cq &&
+		    !test_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags))
+			nvme_poll_irqdisable(nvmeq, -1);
+
+		sent--;
 		if (nr_queues)
 			goto retry;
 	}
@@ -2746,7 +2758,6 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
 	INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
 	mutex_init(&dev->shutdown_lock);
-	init_completion(&dev->ioq_wait);
 
 	result = nvme_setup_prp_pools(dev);
 	if (result)
-- 
2.19.1
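The core of the patch is a pattern change: instead of one shared completion (ioq_wait) plus polling from the irq-context end_io callback, each queue gets its own completion and an error bit, and any leftover CQE handling is deferred to the waiter in process context. The following userspace sketch models that shape; all names (fake_queue, fake_del_cq_end, fake_disable_io_queues) are hypothetical stand-ins, not kernel code, and booleans stand in for struct completion and the NVMEQ_DELETE_ERROR bit.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of one queue: a per-queue "completion" plus an error
 * flag, mirroring the patch's delete_done / NVMEQ_DELETE_ERROR pair. */
struct fake_queue {
	bool delete_done;	/* stands in for struct completion delete_done */
	bool delete_error;	/* stands in for the NVMEQ_DELETE_ERROR bit */
	int cqes_polled;	/* counts polling done from process context */
};

/* Models nvme_del_cq_end() after the patch: on error only record the flag;
 * never poll here (polling from the irq-context callback is what was removed). */
static void fake_del_cq_end(struct fake_queue *q, int error)
{
	if (error)
		q->delete_error = true;
	q->delete_done = true;	/* complete(&nvmeq->delete_done) stand-in */
}

/* Drain loop shaped like the reworked nvme_disable_io_queues(): wait on each
 * queue's own completion, then poll remaining CQEs from process context, but
 * only for queues whose delete command succeeded.  Returns how many queues
 * were polled. */
static int fake_disable_io_queues(struct fake_queue *queues, int nr,
				  int failing_idx)
{
	int polled = 0;

	for (int i = 0; i < nr; i++)
		fake_del_cq_end(&queues[i], i == failing_idx ? -1 : 0);

	for (int i = 0; i < nr; i++) {
		/* per-queue wait: we know exactly which queue finished */
		assert(queues[i].delete_done);
		if (!queues[i].delete_error) {
			queues[i].cqes_polled++;	/* nvme_poll_irqdisable() stand-in */
			polled++;
		}
	}
	return polled;
}
```

Because the completion is per-queue, the waiter knows which queue just finished and can poll exactly that queue's CQ, which the old shared ioq_wait could not express.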