From: Nitesh Shetty <nj.shetty@samsung.com>
To: Jens Axboe <axboe@kernel.dk>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>, dm-devel@redhat.com,
	Keith Busch <kbusch@kernel.org>, Christoph Hellwig <hch@lst.de>,
	Sagi Grimberg <sagi@grimberg.me>, James Smart <james.smart@broadcom.com>,
	Chaitanya Kulkarni <kch@nvidia.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>
Cc: anuj20.g@samsung.com, joshi.k@samsung.com, p.raghav@samsung.com,
	nitheshshetty@gmail.com, gost.dev@samsung.com,
	Nitesh Shetty <nj.shetty@samsung.com>, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v6 9/9] dm kcopyd: use copy offload support
Date: Thu, 12 Jan 2023 17:29:03 +0530	[thread overview]
Message-ID: <20230112115908.23662-10-nj.shetty@samsung.com> (raw)
In-Reply-To: <20230112115908.23662-1-nj.shetty@samsung.com>

Introduce a copy_jobs list to use copy offload when the underlying
devices support it, falling back to the existing method otherwise.

run_copy_job() calls the block layer copy offload API when the source
and destination share the same request queue and that queue supports
copy offload. On successful completion, the copied count of each
destination region is set to zero; failed regions are processed via the
existing method.

Signed-off-by: Nitesh Shetty <nj.shetty@samsung.com>
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
---
 drivers/md/dm-kcopyd.c | 56 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 6 deletions(-)

diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index 4d3bbbea2e9a..2f9985f671ac 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -74,18 +74,20 @@ struct dm_kcopyd_client {
 	atomic_t nr_jobs;
 
 	/*
-	 * We maintain four lists of jobs:
+	 * We maintain five lists of jobs:
 	 *
-	 * i)   jobs waiting for pages
-	 * ii)  jobs that have pages, and are waiting for the io to be issued.
-	 * iii) jobs that don't need to do any IO and just run a callback
-	 * iv)  jobs that have completed.
+	 * i)   jobs waiting to try copy offload
+	 * ii)  jobs waiting for pages
+	 * iii) jobs that have pages, and are waiting for the io to be issued.
+	 * iv)  jobs that don't need to do any IO and just run a callback
+	 * v)   jobs that have completed.
 	 *
-	 * All four of these are protected by job_lock.
+	 * All five of these are protected by job_lock.
 	 */
 	spinlock_t job_lock;
 	struct list_head callback_jobs;
 	struct list_head complete_jobs;
+	struct list_head copy_jobs;
 	struct list_head io_jobs;
 	struct list_head pages_jobs;
 };
@@ -579,6 +581,43 @@ static int run_io_job(struct kcopyd_job *job)
 	return r;
 }
 
+static int run_copy_job(struct kcopyd_job *job)
+{
+	int r, i, count = 0;
+	struct range_entry range;
+
+	struct request_queue *src_q, *dest_q;
+
+	for (i = 0; i < job->num_dests; i++) {
+		range.dst = job->dests[i].sector << SECTOR_SHIFT;
+		range.src = job->source.sector << SECTOR_SHIFT;
+		range.len = job->source.count << SECTOR_SHIFT;
+
+		src_q = bdev_get_queue(job->source.bdev);
+		dest_q = bdev_get_queue(job->dests[i].bdev);
+
+		if (src_q != dest_q || !blk_queue_copy(src_q))
+			break;
+
+		r = blkdev_issue_copy(job->source.bdev, job->dests[i].bdev,
+				&range, 1, NULL, NULL, GFP_KERNEL);
+		if (r)
+			break;
+
+		job->dests[i].count = 0;
+		count++;
+	}
+
+	if (count == job->num_dests) {
+		push(&job->kc->complete_jobs, job);
+	} else {
+		push(&job->kc->pages_jobs, job);
+		r = 0;
+	}
+
+	return r;
+}
+
 static int run_pages_job(struct kcopyd_job *job)
 {
 	int r;
@@ -659,6 +698,7 @@ static void do_work(struct work_struct *work)
 	spin_unlock_irq(&kc->job_lock);
 
 	blk_start_plug(&plug);
+	process_jobs(&kc->copy_jobs, kc, run_copy_job);
 	process_jobs(&kc->complete_jobs, kc, run_complete_job);
 	process_jobs(&kc->pages_jobs, kc, run_pages_job);
 	process_jobs(&kc->io_jobs, kc, run_io_job);
@@ -676,6 +716,8 @@ static void dispatch_job(struct kcopyd_job *job)
 	atomic_inc(&kc->nr_jobs);
 	if (unlikely(!job->source.count))
 		push(&kc->callback_jobs, job);
+	else if (job->source.bdev->bd_disk == job->dests[0].bdev->bd_disk)
+		push(&kc->copy_jobs, job);
 	else if (job->pages == &zero_page_list)
 		push(&kc->io_jobs, job);
 	else
@@ -916,6 +958,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro
 	spin_lock_init(&kc->job_lock);
 	INIT_LIST_HEAD(&kc->callback_jobs);
 	INIT_LIST_HEAD(&kc->complete_jobs);
+	INIT_LIST_HEAD(&kc->copy_jobs);
 	INIT_LIST_HEAD(&kc->io_jobs);
 	INIT_LIST_HEAD(&kc->pages_jobs);
 	kc->throttle = throttle;
@@ -971,6 +1014,7 @@ void dm_kcopyd_client_destroy(struct dm_kcopyd_client *kc)
 
 	BUG_ON(!list_empty(&kc->callback_jobs));
 	BUG_ON(!list_empty(&kc->complete_jobs));
+	WARN_ON(!list_empty(&kc->copy_jobs));
 	BUG_ON(!list_empty(&kc->io_jobs));
 	BUG_ON(!list_empty(&kc->pages_jobs));
 	destroy_workqueue(kc->kcopyd_wq);
-- 
2.35.1.500.gb896f729e2