From: Ming Lei <ming.lei@redhat.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Jens Axboe <axboe@kernel.dk>, linux-block@vger.kernel.org,
	Bart Van Assche <bvanassche@acm.org>, linux-nvme@lists.infradead.org,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH 1/2] blk-mq: introduce blk_mq_complete_request_sync()
Date: Thu, 21 Mar 2019 10:32:36 +0800
Message-ID: <20190321023235.GB15115@ming.t460p>
In-Reply-To: <95da080a-7fb4-33a9-1dc3-4452c565c83a@grimberg.me>

On Wed, Mar 20, 2019 at 07:04:09PM -0700, Sagi Grimberg wrote:
> > > > Hi Bart,
> > >
> > > If I understand the race correctly, its not between the requests
> > > completion and the queue pairs removal nor the timeout handler
> > > necessarily, but rather it is between the async requests completion and
> > > the tagset deallocation.
> > >
> > > Think of surprise removal (or disconnect) during I/O, drivers
> > > usually stop/quiesce/freeze the queues, terminate/abort inflight
> > > I/Os and then teardown the hw queues and the tagset.
> > >
> > > IIRC, the same race holds for srp if this happens during I/O:
> > > 1. srp_rport_delete() -> srp_remove_target() -> srp_stop_rport_timers() ->
> > >    __rport_fail_io_fast()
> > >
> > > 2. complete all I/Os (async remotely via smp)
> > >
> > > Then continue..
> > >
> > > 3. scsi_host_put() -> scsi_host_dev_release() -> scsi_mq_destroy_tags()
> > >
> > > What is preventing (3) from happening before (2) if its async? I would
> > > think that scsi drivers need the exact same thing...
> >
> > blk_cleanup_queue() will do that, but it can't be used in device recovery
> > obviously.
>
> But in device recovery we never free the tagset... I might be missing
> the race here then...

For example:

nvme_rdma_complete_rq
	->nvme_rdma_unmap_data
		->ib_mr_pool_put

But the ib queue pair may have been destroyed by nvme_rdma_destroy_io_queues()
before the request's remote completion.
nvme_rdma_teardown_io_queues:
	nvme_stop_queues(&ctrl->ctrl);
	nvme_rdma_stop_io_queues(ctrl);
	blk_mq_tagset_busy_iter(&ctrl->tag_set, nvme_cancel_request,
			&ctrl->ctrl);
	if (remove)
		nvme_start_queues(&ctrl->ctrl);
	nvme_rdma_destroy_io_queues(ctrl, remove);

> > BTW, blk_mq_complete_request_sync() is a bit misleading, maybe
> > blk_mq_complete_request_locally() is better.
>
> Not really... Naming is always the hard part...

Thanks,
Ming