* Re: [PATCH v2 05/10] block: add emulation for copy
@ 2022-02-10 21:28 kernel test robot
  0 siblings, 0 replies; 6+ messages in thread
From: kernel test robot @ 2022-02-10 21:28 UTC (permalink / raw)
To: kbuild

[-- Attachment #1: Type: text/plain, Size: 12504 bytes --]

CC: kbuild-all@lists.01.org
In-Reply-To: <20220207141348.4235-6-nj.shetty@samsung.com>
References: <20220207141348.4235-6-nj.shetty@samsung.com>
TO: Nitesh Shetty <nj.shetty@samsung.com>
TO: mpatocka@redhat.com
CC: javier@javigon.com, chaitanyak@nvidia.com, linux-block@vger.kernel.org,
    linux-scsi@vger.kernel.org, dm-devel@redhat.com,
    linux-nvme@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    axboe@kernel.dk, msnitzer@redhat.com, bvanassche@acm.org,
    martin.petersen@oracle.com, roland@purestorage.com, hare@suse.de,
    kbusch@kernel.org, hch@lst.de, Frederick.Knight@netapp.com,
    zach.brown@ni.com, osandov@fb.com, lsf-pc@lists.linux-foundation.org,
    djwong@kernel.org, josef@toxicpanda.com, clm@fb.com, dsterba@suse.com,
    tytso@mit.edu, jack@suse.com, joshi.k@samsung.com,
    arnav.dawn@samsung.com, nj.shetty@samsung.com

Hi Nitesh,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on axboe-block/for-next]
[also build test WARNING on next-20220210]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Nitesh-Shetty/block-make-bio_map_kern-non-static/20220207-231407
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
:::::: branch date: 3 days ago
:::::: commit date: 3 days ago
config: i386-randconfig-m021-20220207 (https://download.01.org/0day-ci/archive/20220211/202202110554.gDIZXN5V-lkp@intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

New smatch warnings:
block/blk-lib.c:310 blk_submit_rw_buf() error: uninitialized symbol 'bio'.
block/blk-lib.c:414 blk_copy_emulate() error: uninitialized symbol 'ret'.

Old smatch warnings:
block/blk-lib.c:272 blk_copy_offload() warn: possible memory leak of 'ctx'

vim +/bio +310 block/blk-lib.c

12a9801a7301f1 Nitesh Shetty 2022-02-07  274  
a7bb30870db803 Nitesh Shetty 2022-02-07  275  int blk_submit_rw_buf(struct block_device *bdev, void *buf, sector_t buf_len,
a7bb30870db803 Nitesh Shetty 2022-02-07  276  		sector_t sector, unsigned int op, gfp_t gfp_mask)
a7bb30870db803 Nitesh Shetty 2022-02-07  277  {
a7bb30870db803 Nitesh Shetty 2022-02-07  278  	struct request_queue *q = bdev_get_queue(bdev);
a7bb30870db803 Nitesh Shetty 2022-02-07  279  	struct bio *bio, *parent = NULL;
a7bb30870db803 Nitesh Shetty 2022-02-07  280  	sector_t max_hw_len = min_t(unsigned int, queue_max_hw_sectors(q),
a7bb30870db803 Nitesh Shetty 2022-02-07  281  			queue_max_segments(q) << (PAGE_SHIFT - SECTOR_SHIFT)) << SECTOR_SHIFT;
a7bb30870db803 Nitesh Shetty 2022-02-07  282  	sector_t len, remaining;
a7bb30870db803 Nitesh Shetty 2022-02-07  283  	int ret;
a7bb30870db803 Nitesh Shetty 2022-02-07  284  
a7bb30870db803 Nitesh Shetty 2022-02-07  285  	for (remaining = buf_len; remaining > 0; remaining -= len) {
a7bb30870db803 Nitesh Shetty 2022-02-07  286  		len = min_t(int, max_hw_len, remaining);
a7bb30870db803 Nitesh Shetty 2022-02-07  287  retry:
a7bb30870db803 Nitesh Shetty 2022-02-07  288  		bio = bio_map_kern(q, buf, len, gfp_mask);
a7bb30870db803 Nitesh Shetty 2022-02-07  289  		if (IS_ERR(bio)) {
a7bb30870db803 Nitesh Shetty 2022-02-07  290  			len >>= 1;
a7bb30870db803 Nitesh Shetty 2022-02-07  291  			if (len)
a7bb30870db803 Nitesh Shetty 2022-02-07  292  				goto retry;
a7bb30870db803 Nitesh Shetty 2022-02-07  293  			return PTR_ERR(bio);
a7bb30870db803 Nitesh Shetty 2022-02-07  294  		}
a7bb30870db803 Nitesh Shetty 2022-02-07  295  
a7bb30870db803 Nitesh Shetty 2022-02-07  296  		bio->bi_iter.bi_sector = sector >> SECTOR_SHIFT;
a7bb30870db803 Nitesh Shetty 2022-02-07  297  		bio->bi_opf = op;
a7bb30870db803 Nitesh Shetty 2022-02-07  298  		bio_set_dev(bio, bdev);
a7bb30870db803 Nitesh Shetty 2022-02-07  299  		bio->bi_end_io = NULL;
a7bb30870db803 Nitesh Shetty 2022-02-07  300  		bio->bi_private = NULL;
a7bb30870db803 Nitesh Shetty 2022-02-07  301  
a7bb30870db803 Nitesh Shetty 2022-02-07  302  		if (parent) {
a7bb30870db803 Nitesh Shetty 2022-02-07  303  			bio_chain(parent, bio);
a7bb30870db803 Nitesh Shetty 2022-02-07  304  			submit_bio(parent);
a7bb30870db803 Nitesh Shetty 2022-02-07  305  		}
a7bb30870db803 Nitesh Shetty 2022-02-07  306  		parent = bio;
a7bb30870db803 Nitesh Shetty 2022-02-07  307  		sector += len;
a7bb30870db803 Nitesh Shetty 2022-02-07  308  		buf = (char *) buf + len;
a7bb30870db803 Nitesh Shetty 2022-02-07  309  	}
a7bb30870db803 Nitesh Shetty 2022-02-07 @310  	ret = submit_bio_wait(bio);
a7bb30870db803 Nitesh Shetty 2022-02-07  311  	bio_put(bio);
a7bb30870db803 Nitesh Shetty 2022-02-07  312  
a7bb30870db803 Nitesh Shetty 2022-02-07  313  	return ret;
a7bb30870db803 Nitesh Shetty 2022-02-07  314  }
a7bb30870db803 Nitesh Shetty 2022-02-07  315  
a7bb30870db803 Nitesh Shetty 2022-02-07  316  static void *blk_alloc_buf(sector_t req_size, sector_t *alloc_size, gfp_t gfp_mask)
a7bb30870db803 Nitesh Shetty 2022-02-07  317  {
a7bb30870db803 Nitesh Shetty 2022-02-07  318  	int min_size = PAGE_SIZE;
a7bb30870db803 Nitesh Shetty 2022-02-07  319  	void *buf;
a7bb30870db803 Nitesh Shetty 2022-02-07  320  
a7bb30870db803 Nitesh Shetty 2022-02-07  321  	while (req_size >= min_size) {
a7bb30870db803 Nitesh Shetty 2022-02-07  322  		buf = kvmalloc(req_size, gfp_mask);
a7bb30870db803 Nitesh Shetty 2022-02-07  323  		if (buf) {
a7bb30870db803 Nitesh Shetty 2022-02-07  324  			*alloc_size = req_size;
a7bb30870db803 Nitesh Shetty 2022-02-07  325  			return buf;
a7bb30870db803 Nitesh Shetty 2022-02-07  326  		}
a7bb30870db803 Nitesh Shetty 2022-02-07  327  		/* retry half the requested size */
a7bb30870db803 Nitesh Shetty 2022-02-07  328  		req_size >>= 1;
a7bb30870db803 Nitesh Shetty 2022-02-07  329  	}
a7bb30870db803 Nitesh Shetty 2022-02-07  330  
a7bb30870db803 Nitesh Shetty 2022-02-07  331  	return NULL;
a7bb30870db803 Nitesh Shetty 2022-02-07  332  }
a7bb30870db803 Nitesh Shetty 2022-02-07  333  
12a9801a7301f1 Nitesh Shetty 2022-02-07  334  static inline int blk_copy_sanity_check(struct block_device *src_bdev,
12a9801a7301f1 Nitesh Shetty 2022-02-07  335  	struct block_device *dst_bdev, struct range_entry *rlist, int nr)
12a9801a7301f1 Nitesh Shetty 2022-02-07  336  {
12a9801a7301f1 Nitesh Shetty 2022-02-07  337  	unsigned int align_mask = max(
12a9801a7301f1 Nitesh Shetty 2022-02-07  338  		bdev_logical_block_size(dst_bdev), bdev_logical_block_size(src_bdev)) - 1;
12a9801a7301f1 Nitesh Shetty 2022-02-07  339  	sector_t len = 0;
12a9801a7301f1 Nitesh Shetty 2022-02-07  340  	int i;
12a9801a7301f1 Nitesh Shetty 2022-02-07  341  
12a9801a7301f1 Nitesh Shetty 2022-02-07  342  	for (i = 0; i < nr; i++) {
12a9801a7301f1 Nitesh Shetty 2022-02-07  343  		if (rlist[i].len)
12a9801a7301f1 Nitesh Shetty 2022-02-07  344  			len += rlist[i].len;
12a9801a7301f1 Nitesh Shetty 2022-02-07  345  		else
12a9801a7301f1 Nitesh Shetty 2022-02-07  346  			return -EINVAL;
12a9801a7301f1 Nitesh Shetty 2022-02-07  347  		if ((rlist[i].dst & align_mask) || (rlist[i].src & align_mask) ||
12a9801a7301f1 Nitesh Shetty 2022-02-07  348  				(rlist[i].len & align_mask))
12a9801a7301f1 Nitesh Shetty 2022-02-07  349  			return -EINVAL;
12a9801a7301f1 Nitesh Shetty 2022-02-07  350  		rlist[i].comp_len = 0;
12a9801a7301f1 Nitesh Shetty 2022-02-07  351  	}
12a9801a7301f1 Nitesh Shetty 2022-02-07  352  
12a9801a7301f1 Nitesh Shetty 2022-02-07  353  	if (!len && len >= MAX_COPY_TOTAL_LENGTH)
12a9801a7301f1 Nitesh Shetty 2022-02-07  354  		return -EINVAL;
12a9801a7301f1 Nitesh Shetty 2022-02-07  355  
12a9801a7301f1 Nitesh Shetty 2022-02-07  356  	return 0;
12a9801a7301f1 Nitesh Shetty 2022-02-07  357  }
12a9801a7301f1 Nitesh Shetty 2022-02-07  358  
a7bb30870db803 Nitesh Shetty 2022-02-07  359  static inline sector_t blk_copy_max_range(struct range_entry *rlist, int nr, sector_t *max_len)
a7bb30870db803 Nitesh Shetty 2022-02-07  360  {
a7bb30870db803 Nitesh Shetty 2022-02-07  361  	int i;
a7bb30870db803 Nitesh Shetty 2022-02-07  362  	sector_t len = 0;
a7bb30870db803 Nitesh Shetty 2022-02-07  363  
a7bb30870db803 Nitesh Shetty 2022-02-07  364  	*max_len = 0;
a7bb30870db803 Nitesh Shetty 2022-02-07  365  	for (i = 0; i < nr; i++) {
a7bb30870db803 Nitesh Shetty 2022-02-07  366  		*max_len = max(*max_len, rlist[i].len);
a7bb30870db803 Nitesh Shetty 2022-02-07  367  		len += rlist[i].len;
a7bb30870db803 Nitesh Shetty 2022-02-07  368  	}
a7bb30870db803 Nitesh Shetty 2022-02-07  369  
a7bb30870db803 Nitesh Shetty 2022-02-07  370  	return len;
a7bb30870db803 Nitesh Shetty 2022-02-07  371  }
a7bb30870db803 Nitesh Shetty 2022-02-07  372  
a7bb30870db803 Nitesh Shetty 2022-02-07  373  /*
a7bb30870db803 Nitesh Shetty 2022-02-07  374   * If native copy offload feature is absent, this function tries to emulate,
a7bb30870db803 Nitesh Shetty 2022-02-07  375   * by copying data from source to a temporary buffer and from buffer to
a7bb30870db803 Nitesh Shetty 2022-02-07  376   * destination device.
a7bb30870db803 Nitesh Shetty 2022-02-07  377   */
a7bb30870db803 Nitesh Shetty 2022-02-07  378  static int blk_copy_emulate(struct block_device *src_bdev, int nr,
a7bb30870db803 Nitesh Shetty 2022-02-07  379  		struct range_entry *rlist, struct block_device *dest_bdev, gfp_t gfp_mask)
a7bb30870db803 Nitesh Shetty 2022-02-07  380  {
a7bb30870db803 Nitesh Shetty 2022-02-07  381  	void *buf = NULL;
a7bb30870db803 Nitesh Shetty 2022-02-07  382  	int ret, nr_i = 0;
a7bb30870db803 Nitesh Shetty 2022-02-07  383  	sector_t src, dst, copy_len, buf_len, read_len, copied_len, max_len = 0, remaining = 0;
a7bb30870db803 Nitesh Shetty 2022-02-07  384  
a7bb30870db803 Nitesh Shetty 2022-02-07  385  	copy_len = blk_copy_max_range(rlist, nr, &max_len);
a7bb30870db803 Nitesh Shetty 2022-02-07  386  	buf = blk_alloc_buf(max_len, &buf_len, gfp_mask);
a7bb30870db803 Nitesh Shetty 2022-02-07  387  	if (!buf)
a7bb30870db803 Nitesh Shetty 2022-02-07  388  		return -ENOMEM;
a7bb30870db803 Nitesh Shetty 2022-02-07  389  
a7bb30870db803 Nitesh Shetty 2022-02-07  390  	for (copied_len = 0; copied_len < copy_len; copied_len += read_len) {
a7bb30870db803 Nitesh Shetty 2022-02-07  391  		if (!remaining) {
a7bb30870db803 Nitesh Shetty 2022-02-07  392  			rlist[nr_i].comp_len = 0;
a7bb30870db803 Nitesh Shetty 2022-02-07  393  			src = rlist[nr_i].src;
a7bb30870db803 Nitesh Shetty 2022-02-07  394  			dst = rlist[nr_i].dst;
a7bb30870db803 Nitesh Shetty 2022-02-07  395  			remaining = rlist[nr_i++].len;
a7bb30870db803 Nitesh Shetty 2022-02-07  396  		}
a7bb30870db803 Nitesh Shetty 2022-02-07  397  
a7bb30870db803 Nitesh Shetty 2022-02-07  398  		read_len = min_t(sector_t, remaining, buf_len);
a7bb30870db803 Nitesh Shetty 2022-02-07  399  		ret = blk_submit_rw_buf(src_bdev, buf, read_len, src, REQ_OP_READ, gfp_mask);
a7bb30870db803 Nitesh Shetty 2022-02-07  400  		if (ret)
a7bb30870db803 Nitesh Shetty 2022-02-07  401  			goto out;
a7bb30870db803 Nitesh Shetty 2022-02-07  402  		src += read_len;
a7bb30870db803 Nitesh Shetty 2022-02-07  403  		remaining -= read_len;
a7bb30870db803 Nitesh Shetty 2022-02-07  404  		ret = blk_submit_rw_buf(dest_bdev, buf, read_len, dst, REQ_OP_WRITE,
a7bb30870db803 Nitesh Shetty 2022-02-07  405  				gfp_mask);
a7bb30870db803 Nitesh Shetty 2022-02-07  406  		if (ret)
a7bb30870db803 Nitesh Shetty 2022-02-07  407  			goto out;
a7bb30870db803 Nitesh Shetty 2022-02-07  408  		else
a7bb30870db803 Nitesh Shetty 2022-02-07  409  			rlist[nr_i - 1].comp_len += read_len;
a7bb30870db803 Nitesh Shetty 2022-02-07  410  		dst += read_len;
a7bb30870db803 Nitesh Shetty 2022-02-07  411  	}
a7bb30870db803 Nitesh Shetty 2022-02-07  412  out:
a7bb30870db803 Nitesh Shetty 2022-02-07  413  	kvfree(buf);
a7bb30870db803 Nitesh Shetty 2022-02-07 @414  	return ret;
a7bb30870db803 Nitesh Shetty 2022-02-07  415  }
a7bb30870db803 Nitesh Shetty 2022-02-07  416  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 6+ messages in thread
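[Editorial note: the blk_alloc_buf() helper quoted in the report above degrades gracefully under memory pressure by halving the requested size until an allocation succeeds or it falls below one page. A minimal userspace sketch of the same strategy, using plain malloc() in place of kvmalloc(); the function name and the "granule" parameter (standing in for PAGE_SIZE) are illustrative, not from the patch.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Userspace sketch of the blk_alloc_buf() fallback: try the requested
 * size first, and on allocation failure retry with half the size until
 * we would drop below the minimum granule.  On success, report how much
 * was actually allocated through *alloc_size.
 */
static void *alloc_halving(size_t req_size, size_t granule, size_t *alloc_size)
{
	while (req_size >= granule) {
		void *buf = malloc(req_size);
		if (buf) {
			*alloc_size = req_size;
			return buf;
		}
		/* retry half the requested size */
		req_size >>= 1;
	}
	return NULL;	/* even one granule could not be allocated */
}
```

The caller must then be prepared for *alloc_size to be smaller than requested, which is exactly why blk_copy_emulate() loops with `read_len = min_t(sector_t, remaining, buf_len)`.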
* Re: [RFC PATCH 0/3] NVMe copy offload patches
@ 2022-02-04 19:41 Nitesh Shetty
  2022-02-07 14:13 ` [PATCH v2 00/10] Add Copy offload support Nitesh Shetty
  0 siblings, 1 reply; 6+ messages in thread
From: Nitesh Shetty @ 2022-02-04 19:41 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Javier González, Chaitanya Kulkarni, linux-block, linux-scsi, dm-devel,
    linux-nvme, linux-fsdevel, Jens Axboe, msnitzer@redhat.com,
    Bart Van Assche, Martin K. Petersen, roland, Hannes Reinecke,
    Keith Busch, Christoph Hellwig, Frederick.Knight, zach.brown, osandov,
    lsf-pc, djwong, josef, clm, dsterba, tytso, jack, Kanchan Joshi,
    arnav.dawn, Nitesh Shetty

On Wed, Feb 2, 2022 at 12:23 PM Mikulas Patocka <mpatocka@redhat.com> wrote:
>
> Hi
>
> Here I'm submitting the first version of NVMe copy offload patches as a
> request for comment. They use the token-based approach as we discussed on
> the phone call.
>
> The first patch adds generic copy offload support to the block layer - it
> adds two new bio types (REQ_OP_COPY_READ_TOKEN and
> REQ_OP_COPY_WRITE_TOKEN), a new ioctl BLKCOPY and a kernel function
> blkdev_issue_copy.
>
> The second patch adds copy offload support to the NVMe subsystem.
>
> The third patch implements a "nvme-debug" driver - it is similar to
> "scsi-debug"; it simulates an NVMe host controller, keeps data in memory
> and supports copy offload according to NVMe Command Set Specification
> 1.0a. (There are no hardware or software implementations supporting copy
> offload so far, so I implemented it in nvme-debug.)
>
> TODO:
> * implement copy offload in the device mapper linear target
> * implement copy offload in the software NVMe target driver

We had a series that adds these two elements:
https://github.com/nitesh-shetty/linux_copy_offload/tree/main/v1

Overall the series supports:
1. A multi-source/destination interface (yes, it adds complexity, but the
   GC use case needs it)
2. Copy emulation at the block layer
3. dm-linear and dm-kcopyd support (for cases not requiring a split)
4. nvmet support (for block and file backends)

These patches definitely need more feedback. If the links are hard to
read, we can send another RFC instead. But before that it would be great
to have your inputs on the path forward.

PS: The payload scheme in your series is particularly interesting and
simplifies the plumbing, so you might notice that the above patches
borrow it.

> * make it possible to complete REQ_OP_COPY_WRITE_TOKEN bios asynchronously

Patch [0] supports asynchronous copy writes if a multi-dst/src payload is
sent.

[0] https://github.com/nitesh-shetty/linux_copy_offload/blob/main/v1/0003-block-Add-copy-offload-support-infrastructure.patch

--
Nitesh

^ permalink raw reply	[flat|nested] 6+ messages in thread
* [PATCH v2 00/10] Add Copy offload support
@ 2022-02-07 14:13 ` Nitesh Shetty
  [not found] ` <CGME20220207141930epcas5p2bcbff65f78ad1dede64648d73ddb3770@epcas5p2.samsung.com>
  0 siblings, 1 reply; 6+ messages in thread
From: Nitesh Shetty @ 2022-02-07 14:13 UTC (permalink / raw)
To: mpatocka
Cc: javier, chaitanyak, linux-block, linux-scsi, dm-devel, linux-nvme,
    linux-fsdevel, axboe, msnitzer, bvanassche, martin.petersen, roland,
    hare, kbusch, hch, Frederick.Knight, zach.brown, osandov, lsf-pc,
    djwong, josef, clm, dsterba, tytso, jack, joshi.k, arnav.dawn,
    nj.shetty

This patch series covers the points discussed in the November 2021 virtual
call [LSF/MM/BFP TOPIC] Storage: Copy Offload [0]. We have covered the
initially agreed requirements in this patchset. The patchset borrows
Mikulas's token-based approach for the two-bdev implementation. This is on
top of our previous patchset v1 [1].

Overall the series supports:

1. Driver
   - NVMe Copy command (single NS), including support in nvme-target (for
     block and file backends)
2. Block layer
   - Block-generic copy (REQ_COPY flag), with an interface accommodating
     two block devices, and a multi-source/destination interface
   - Emulation, when offload is natively absent
   - dm-linear support (for cases not requiring a split)
3. User interface
   - new ioctl
4. In-kernel user
   - dm-kcopyd

[0] https://lore.kernel.org/linux-nvme/CA+1E3rJ7BZ7LjQXXTdX+-0Edz=zT14mmPGMiVCzUgB33C60tbQ@mail.gmail.com/
[1] https://lore.kernel.org/linux-block/20210817101423.12367-1-selvakuma.s1@samsung.com/

Arnav Dawn (1):
  nvmet: add copy command support for bdev and file ns

Nitesh Shetty (6):
  block: Introduce queue limits for copy-offload support
  block: Add copy offload support infrastructure
  block: Introduce a new ioctl for copy
  block: add emulation for copy
  dm: Add support for copy offload.
  dm: Enable copy offload for dm-linear target

SelvaKumar S (3):
  block: make bio_map_kern() non static
  nvme: add copy support
  dm kcopyd: use copy offload support

 block/blk-lib.c                   | 335 ++++++++++++++++++++++++++++++
 block/blk-map.c                   |   2 +-
 block/blk-settings.c              |   6 +
 block/blk-sysfs.c                 |  51 +++++
 block/blk.h                       |   2 +
 block/ioctl.c                     |  37 ++++
 drivers/md/dm-kcopyd.c            |  57 ++++-
 drivers/md/dm-linear.c            |   1 +
 drivers/md/dm-table.c             |  43 ++++
 drivers/md/dm.c                   |   6 +
 drivers/nvme/host/core.c          | 121 ++++++++++-
 drivers/nvme/host/nvme.h          |   7 +
 drivers/nvme/host/pci.c           |   9 +
 drivers/nvme/host/trace.c         |  19 ++
 drivers/nvme/target/admin-cmd.c   |   8 +-
 drivers/nvme/target/io-cmd-bdev.c |  66 ++++++
 drivers/nvme/target/io-cmd-file.c |  48 +++++
 include/linux/blk_types.h         |  20 ++
 include/linux/blkdev.h            |  17 ++
 include/linux/device-mapper.h     |   5 +
 include/linux/nvme.h              |  43 +++-
 include/uapi/linux/fs.h           |  23 ++
 22 files changed, 912 insertions(+), 14 deletions(-)

--
2.30.0-rc0

^ permalink raw reply	[flat|nested] 6+ messages in thread
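[Editorial note: the series above validates each user-supplied copy range against the device's logical block size before issuing any I/O (see blk_copy_sanity_check() in patch 05). A userspace sketch of that validation follows; the `range_entry` field names are taken from the posted patch, while the function name, parameters, and bounds check are illustrative. Note the posted total-length test `if (!len && len >= MAX_COPY_TOTAL_LENGTH)` can never be true (a value cannot be both zero and at least a positive limit), so the sketch uses what appears to be the intended check.]

```c
#include <assert.h>
#include <stdint.h>

/* Payload layout mirroring the patchset's range_entry (all byte offsets). */
struct range_entry {
	uint64_t src;
	uint64_t dst;
	uint64_t len;
	uint64_t comp_len;	/* bytes actually copied, filled on completion */
};

/*
 * Return 1 if every range is non-empty and aligned to the larger of the
 * two devices' logical block sizes, and the total length is positive and
 * under max_total; 0 otherwise.  Mirrors blk_copy_sanity_check().
 */
static int copy_ranges_valid(const struct range_entry *r, int nr,
			     unsigned int lbs_src, unsigned int lbs_dst,
			     uint64_t max_total)
{
	/* block sizes are powers of two, so (size - 1) is an alignment mask */
	unsigned int align_mask = (lbs_src > lbs_dst ? lbs_src : lbs_dst) - 1;
	uint64_t total = 0;
	int i;

	for (i = 0; i < nr; i++) {
		if (!r[i].len)
			return 0;
		if ((r[i].src & align_mask) || (r[i].dst & align_mask) ||
		    (r[i].len & align_mask))
			return 0;
		total += r[i].len;
	}
	return total > 0 && total < max_total;
}
```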
[parent not found: <CGME20220207141930epcas5p2bcbff65f78ad1dede64648d73ddb3770@epcas5p2.samsung.com>]
* [PATCH v2 05/10] block: add emulation for copy
  [not found] ` <CGME20220207141930epcas5p2bcbff65f78ad1dede64648d73ddb3770@epcas5p2.samsung.com>
@ 2022-02-07 14:13 ` Nitesh Shetty
  2022-02-08  3:20 ` kernel test robot
  2022-02-16 13:32 ` Mikulas Patocka
  0 siblings, 2 replies; 6+ messages in thread
From: Nitesh Shetty @ 2022-02-07 14:13 UTC (permalink / raw)
To: mpatocka
Cc: javier, chaitanyak, linux-block, linux-scsi, dm-devel, linux-nvme,
    linux-fsdevel, axboe, msnitzer, bvanassche, martin.petersen, roland,
    hare, kbusch, hch, Frederick.Knight, zach.brown, osandov, lsf-pc,
    djwong, josef, clm, dsterba, tytso, jack, joshi.k, arnav.dawn,
    nj.shetty

For devices which do not support copy, copy emulation is added.
Copy emulation is implemented by reading from the source ranges into
memory and writing to the corresponding destinations synchronously.

TODO: Optimise emulation.

Signed-off-by: Nitesh Shetty <nj.shetty@samsung.com>
---
 block/blk-lib.c | 119 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 119 insertions(+)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 3ae2c27b566e..05c8cd02fffc 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -272,6 +272,65 @@ int blk_copy_offload(struct block_device *src_bdev, int nr_srcs,
 	return cio_await_completion(cio);
 }
 
+int blk_submit_rw_buf(struct block_device *bdev, void *buf, sector_t buf_len,
+		sector_t sector, unsigned int op, gfp_t gfp_mask)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+	struct bio *bio, *parent = NULL;
+	sector_t max_hw_len = min_t(unsigned int, queue_max_hw_sectors(q),
+			queue_max_segments(q) << (PAGE_SHIFT - SECTOR_SHIFT)) << SECTOR_SHIFT;
+	sector_t len, remaining;
+	int ret;
+
+	for (remaining = buf_len; remaining > 0; remaining -= len) {
+		len = min_t(int, max_hw_len, remaining);
+retry:
+		bio = bio_map_kern(q, buf, len, gfp_mask);
+		if (IS_ERR(bio)) {
+			len >>= 1;
+			if (len)
+				goto retry;
+			return PTR_ERR(bio);
+		}
+
+		bio->bi_iter.bi_sector = sector >> SECTOR_SHIFT;
+		bio->bi_opf = op;
+		bio_set_dev(bio, bdev);
+		bio->bi_end_io = NULL;
+		bio->bi_private = NULL;
+
+		if (parent) {
+			bio_chain(parent, bio);
+			submit_bio(parent);
+		}
+		parent = bio;
+		sector += len;
+		buf = (char *) buf + len;
+	}
+	ret = submit_bio_wait(bio);
+	bio_put(bio);
+
+	return ret;
+}
+
+static void *blk_alloc_buf(sector_t req_size, sector_t *alloc_size, gfp_t gfp_mask)
+{
+	int min_size = PAGE_SIZE;
+	void *buf;
+
+	while (req_size >= min_size) {
+		buf = kvmalloc(req_size, gfp_mask);
+		if (buf) {
+			*alloc_size = req_size;
+			return buf;
+		}
+		/* retry half the requested size */
+		req_size >>= 1;
+	}
+
+	return NULL;
+}
+
 static inline int blk_copy_sanity_check(struct block_device *src_bdev,
 	struct block_device *dst_bdev, struct range_entry *rlist, int nr)
 {
@@ -297,6 +356,64 @@ static inline int blk_copy_sanity_check(struct block_device *src_bdev,
 	return 0;
 }
 
+static inline sector_t blk_copy_max_range(struct range_entry *rlist, int nr, sector_t *max_len)
+{
+	int i;
+	sector_t len = 0;
+
+	*max_len = 0;
+	for (i = 0; i < nr; i++) {
+		*max_len = max(*max_len, rlist[i].len);
+		len += rlist[i].len;
+	}
+
+	return len;
+}
+
+/*
+ * If native copy offload feature is absent, this function tries to emulate,
+ * by copying data from source to a temporary buffer and from buffer to
+ * destination device.
+ */
+static int blk_copy_emulate(struct block_device *src_bdev, int nr,
+		struct range_entry *rlist, struct block_device *dest_bdev, gfp_t gfp_mask)
+{
+	void *buf = NULL;
+	int ret, nr_i = 0;
+	sector_t src, dst, copy_len, buf_len, read_len, copied_len, max_len = 0, remaining = 0;
+
+	copy_len = blk_copy_max_range(rlist, nr, &max_len);
+	buf = blk_alloc_buf(max_len, &buf_len, gfp_mask);
+	if (!buf)
+		return -ENOMEM;
+
+	for (copied_len = 0; copied_len < copy_len; copied_len += read_len) {
+		if (!remaining) {
+			rlist[nr_i].comp_len = 0;
+			src = rlist[nr_i].src;
+			dst = rlist[nr_i].dst;
+			remaining = rlist[nr_i++].len;
+		}
+
+		read_len = min_t(sector_t, remaining, buf_len);
+		ret = blk_submit_rw_buf(src_bdev, buf, read_len, src, REQ_OP_READ, gfp_mask);
+		if (ret)
+			goto out;
+		src += read_len;
+		remaining -= read_len;
+		ret = blk_submit_rw_buf(dest_bdev, buf, read_len, dst, REQ_OP_WRITE,
+				gfp_mask);
+		if (ret)
+			goto out;
+		else
+			rlist[nr_i - 1].comp_len += read_len;
+		dst += read_len;
+	}
+out:
+	kvfree(buf);
+	return ret;
+}
+
 static inline bool blk_check_copy_offload(struct request_queue *src_q,
 		struct request_queue *dest_q)
 {
@@ -346,6 +463,8 @@ int blkdev_issue_copy(struct block_device *src_bdev, int nr,
 
 	if (blk_check_copy_offload(src_q, dest_q))
 		ret = blk_copy_offload(src_bdev, nr, rlist, dest_bdev, gfp_mask);
+	else
+		ret = blk_copy_emulate(src_bdev, nr, rlist, dest_bdev, gfp_mask);
 
 	return ret;
 }
-- 
2.30.0-rc0

^ permalink raw reply related	[flat|nested] 6+ messages in thread
* Re: [PATCH v2 05/10] block: add emulation for copy
  2022-02-07 14:13 ` [PATCH v2 05/10] block: add emulation for copy Nitesh Shetty
@ 2022-02-08  3:20 ` kernel test robot
  2022-02-16 13:32 ` Mikulas Patocka
  1 sibling, 0 replies; 6+ messages in thread
From: kernel test robot @ 2022-02-08 3:20 UTC (permalink / raw)
To: Nitesh Shetty, mpatocka
Cc: kbuild-all, javier, chaitanyak, linux-block, linux-scsi, dm-devel,
    linux-nvme, linux-fsdevel, axboe, msnitzer

Hi Nitesh,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on axboe-block/for-next]
[also build test WARNING on next-20220207]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Nitesh-Shetty/block-make-bio_map_kern-non-static/20220207-231407
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
config: nios2-randconfig-r001-20220207 (https://download.01.org/0day-ci/archive/20220208/202202081132.axCkiVgv-lkp@intel.com/config)
compiler: nios2-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/a7bb30870db803af4ad955a968992222bcfb478f
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Nitesh-Shetty/block-make-bio_map_kern-non-static/20220207-231407
        git checkout a7bb30870db803af4ad955a968992222bcfb478f
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=nios2 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   block/blk-lib.c:185:5: warning: no previous prototype for 'blk_copy_offload' [-Wmissing-prototypes]
     185 | int blk_copy_offload(struct block_device *src_bdev, int nr_srcs,
         |     ^~~~~~~~~~~~~~~~
>> block/blk-lib.c:275:5: warning: no previous prototype for 'blk_submit_rw_buf' [-Wmissing-prototypes]
     275 | int blk_submit_rw_buf(struct block_device *bdev, void *buf, sector_t buf_len,
         |     ^~~~~~~~~~~~~~~~~

vim +/blk_submit_rw_buf +275 block/blk-lib.c

   180	
   181	/*
   182	 * blk_copy_offload - Use device's native copy offload feature
   183	 * Go through user provide payload, prepare new payload based on device's copy offload limits.
   184	 */
 > 185	int blk_copy_offload(struct block_device *src_bdev, int nr_srcs,
   186			struct range_entry *rlist, struct block_device *dst_bdev, gfp_t gfp_mask)
   187	{
   188		struct request_queue *sq = bdev_get_queue(src_bdev);
   189		struct request_queue *dq = bdev_get_queue(dst_bdev);
   190		struct bio *read_bio, *write_bio;
   191		struct copy_ctx *ctx;
   192		struct cio *cio;
   193		struct page *token;
   194		sector_t src_blk, copy_len, dst_blk;
   195		sector_t remaining, max_copy_len = LONG_MAX;
   196		int ri = 0, ret = 0;
   197	
   198		cio = kzalloc(sizeof(struct cio), GFP_KERNEL);
   199		if (!cio)
   200			return -ENOMEM;
   201		atomic_set(&cio->refcount, 0);
   202		cio->rlist = rlist;
   203	
   204		max_copy_len = min3(max_copy_len, (sector_t)sq->limits.max_copy_sectors,
   205				(sector_t)dq->limits.max_copy_sectors);
   206		max_copy_len = min3(max_copy_len, (sector_t)sq->limits.max_copy_range_sectors,
   207				(sector_t)dq->limits.max_copy_range_sectors) << SECTOR_SHIFT;
   208	
   209		for (ri = 0; ri < nr_srcs; ri++) {
   210			cio->rlist[ri].comp_len = rlist[ri].len;
   211			for (remaining = rlist[ri].len, src_blk = rlist[ri].src, dst_blk = rlist[ri].dst;
   212				remaining > 0;
   213				remaining -= copy_len, src_blk += copy_len, dst_blk += copy_len) {
   214				copy_len = min(remaining, max_copy_len);
   215	
   216				token = alloc_page(gfp_mask);
   217				if (unlikely(!token)) {
   218					ret = -ENOMEM;
   219					goto err_token;
   220				}
   221	
   222				read_bio = bio_alloc(src_bdev, 1, REQ_OP_READ | REQ_COPY | REQ_NOMERGE,
   223						gfp_mask);
   224				if (!read_bio) {
   225					ret = -ENOMEM;
   226					goto err_read_bio;
   227				}
   228				read_bio->bi_iter.bi_sector = src_blk >> SECTOR_SHIFT;
   229				read_bio->bi_iter.bi_size = copy_len;
   230				__bio_add_page(read_bio, token, PAGE_SIZE, 0);
   231				ret = submit_bio_wait(read_bio);
   232				if (ret) {
   233					bio_put(read_bio);
   234					goto err_read_bio;
   235				}
   236				bio_put(read_bio);
   237				ctx = kzalloc(sizeof(struct copy_ctx), gfp_mask);
   238				if (!ctx) {
   239					ret = -ENOMEM;
   240					goto err_read_bio;
   241				}
   242				ctx->cio = cio;
   243				ctx->range_idx = ri;
   244				ctx->start_sec = rlist[ri].src;
   245	
   246				write_bio = bio_alloc(dst_bdev, 1, REQ_OP_WRITE | REQ_COPY | REQ_NOMERGE,
   247						gfp_mask);
   248				if (!write_bio) {
   249					ret = -ENOMEM;
   250					goto err_read_bio;
   251				}
   252	
   253				write_bio->bi_iter.bi_sector = dst_blk >> SECTOR_SHIFT;
   254				write_bio->bi_iter.bi_size = copy_len;
   255				__bio_add_page(write_bio, token, PAGE_SIZE, 0);
   256				write_bio->bi_end_io = bio_copy_end_io;
   257				write_bio->bi_private = ctx;
   258				atomic_inc(&cio->refcount);
   259				submit_bio(write_bio);
   260			}
   261		}
   262	
   263		/* Wait for completion of all IO's*/
   264		return cio_await_completion(cio);
   265	
   266	err_read_bio:
   267		__free_page(token);
   268	err_token:
   269		rlist[ri].comp_len = min_t(sector_t, rlist[ri].comp_len, (rlist[ri].len - remaining));
   270	
   271		cio->io_err = ret;
   272		return cio_await_completion(cio);
   273	}
   274	
 > 275	int blk_submit_rw_buf(struct block_device *bdev, void *buf, sector_t buf_len,
   276			sector_t sector, unsigned int op, gfp_t gfp_mask)
   277	{
   278		struct request_queue *q = bdev_get_queue(bdev);
   279		struct bio *bio, *parent = NULL;
   280		sector_t max_hw_len = min_t(unsigned int, queue_max_hw_sectors(q),
   281				queue_max_segments(q) << (PAGE_SHIFT - SECTOR_SHIFT)) << SECTOR_SHIFT;
   282		sector_t len, remaining;
   283		int ret;
   284	
   285		for (remaining = buf_len; remaining > 0; remaining -= len) {
   286			len = min_t(int, max_hw_len, remaining);
   287	retry:
   288			bio = bio_map_kern(q, buf, len, gfp_mask);
   289			if (IS_ERR(bio)) {
   290				len >>= 1;
   291				if (len)
   292					goto retry;
   293				return PTR_ERR(bio);
   294			}
   295	
   296			bio->bi_iter.bi_sector = sector >> SECTOR_SHIFT;
   297			bio->bi_opf = op;
   298			bio_set_dev(bio, bdev);
   299			bio->bi_end_io = NULL;
   300			bio->bi_private = NULL;
   301	
   302			if (parent) {
   303				bio_chain(parent, bio);
   304				submit_bio(parent);
   305			}
   306			parent = bio;
   307			sector += len;
   308			buf = (char *) buf + len;
   309		}
   310		ret = submit_bio_wait(bio);
   311		bio_put(bio);
   312	
   313		return ret;
   314	}
   315	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 6+ messages in thread
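[Editorial note: both blk_copy_offload() above and the emulation path chunk each range by a device limit (`copy_len = min(remaining, max_copy_len)`). The splitting arithmetic itself is easy to get wrong off-by-one, so here is a standalone sketch of it; the function name and fixed-size output array are illustrative.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Split a copy of `len` bytes into chunks no larger than max_copy_len,
 * mirroring the inner loop of blk_copy_offload().  Writes the chunk
 * sizes into `chunks` and returns how many were produced, or -1 if the
 * range does not fit in max_chunks entries.
 */
static int split_range(uint64_t len, uint64_t max_copy_len,
		       uint64_t *chunks, int max_chunks)
{
	int n = 0;

	while (len && n < max_chunks) {
		uint64_t c = len < max_copy_len ? len : max_copy_len;

		chunks[n++] = c;
		len -= c;	/* like remaining -= copy_len */
	}
	return len ? -1 : n;
}
```

Every chunk except possibly the last is exactly max_copy_len, and the chunk sizes always sum back to the original length, which is the invariant the offload loop relies on when it advances src_blk and dst_blk by copy_len each iteration.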
* Re: [PATCH v2 05/10] block: add emulation for copy
  2022-02-07 14:13 ` [PATCH v2 05/10] block: add emulation for copy Nitesh Shetty
  2022-02-08  3:20   ` kernel test robot
@ 2022-02-16 13:32 ` Mikulas Patocka
  2022-02-17 13:18   ` Nitesh Shetty
  1 sibling, 1 reply; 6+ messages in thread
From: Mikulas Patocka @ 2022-02-16 13:32 UTC (permalink / raw)
  To: Nitesh Shetty
  Cc: javier, chaitanyak, linux-block, linux-scsi, dm-devel, linux-nvme,
	linux-fsdevel, axboe, msnitzer, bvanassche, martin.petersen,
	roland, hare, kbusch, hch, Frederick.Knight, zach.brown, osandov,
	lsf-pc, djwong, josef, clm, dsterba, tytso, jack, joshi.k,
	arnav.dawn

On Mon, 7 Feb 2022, Nitesh Shetty wrote:

> +			goto retry;
> +		return PTR_ERR(bio);
> +	}
> +
> +	bio->bi_iter.bi_sector = sector >> SECTOR_SHIFT;
> +	bio->bi_opf = op;
> +	bio_set_dev(bio, bdev);

> @@ -346,6 +463,8 @@ int blkdev_issue_copy(struct block_device *src_bdev, int nr,
> 
>  	if (blk_check_copy_offload(src_q, dest_q))
>  		ret = blk_copy_offload(src_bdev, nr, rlist, dest_bdev, gfp_mask);
> +	else
> +		ret = blk_copy_emulate(src_bdev, nr, rlist, dest_bdev, gfp_mask);
> 
>  	return ret;
>  }

The emulation is not reliable because a device mapper device may be
reconfigured and it may lose the copy capability between the calls to
blk_check_copy_offload and blk_copy_offload.

You should call blk_copy_emulate if blk_copy_offload returns an error.

Mikulas

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH v2 05/10] block: add emulation for copy
  2022-02-16 13:32 ` Mikulas Patocka
@ 2022-02-17 13:18 ` Nitesh Shetty
  0 siblings, 0 replies; 6+ messages in thread
From: Nitesh Shetty @ 2022-02-17 13:18 UTC (permalink / raw)
  To: Mikulas Patocka
  Cc: javier, chaitanyak, linux-block, linux-scsi, dm-devel, linux-nvme,
	linux-fsdevel, axboe, msnitzer, bvanassche, martin.petersen,
	roland, hare, kbusch, hch, Frederick.Knight, zach.brown, osandov,
	lsf-pc, djwong, josef, clm, dsterba, tytso, jack, joshi.k,
	arnav.dawn

[-- Attachment #1: Type: text/plain, Size: 1057 bytes --]

On Wed, Feb 16, 2022 at 08:32:45AM -0500, Mikulas Patocka wrote:
> 
> 
> On Mon, 7 Feb 2022, Nitesh Shetty wrote:
> 
> > +			goto retry;
> > +		return PTR_ERR(bio);
> > +	}
> > +
> > +	bio->bi_iter.bi_sector = sector >> SECTOR_SHIFT;
> > +	bio->bi_opf = op;
> > +	bio_set_dev(bio, bdev);
> > @@ -346,6 +463,8 @@ int blkdev_issue_copy(struct block_device *src_bdev, int nr,
> > 
> >  	if (blk_check_copy_offload(src_q, dest_q))
> >  		ret = blk_copy_offload(src_bdev, nr, rlist, dest_bdev, gfp_mask);
> > +	else
> > +		ret = blk_copy_emulate(src_bdev, nr, rlist, dest_bdev, gfp_mask);
> > 
> >  	return ret;
> > }
> 
> The emulation is not reliable because a device mapper device may be
> reconfigured and it may lose the copy capability between the calls to
> blk_check_copy_offload and blk_copy_offload.
> 
> You should call blk_copy_emulate if blk_copy_offload returns an error.
> 
> Mikulas
> 

I agree, it was in our todo list to fallback to emulation for partial
copy offload failures. In next version we will add this.

--
Nitesh Shetty

[-- Attachment #2: Type: text/plain, Size: 0 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread
end of thread, other threads:[~2022-02-18  5:01 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-10 21:28 [PATCH v2 05/10] block: add emulation for copy kernel test robot
     -- strict thread matches above, loose matches on Subject: below --
2022-02-04 19:41 [RFC PATCH 0/3] NVMe copy offload patches Nitesh Shetty
2022-02-07 14:13 ` [PATCH v2 00/10] Add Copy offload support Nitesh Shetty
     [not found] ` <CGME20220207141930epcas5p2bcbff65f78ad1dede64648d73ddb3770@epcas5p2.samsung.com>
2022-02-07 14:13   ` [PATCH v2 05/10] block: add emulation for copy Nitesh Shetty
2022-02-08  3:20     ` kernel test robot
2022-02-08  3:20     ` kernel test robot
2022-02-16 13:32     ` Mikulas Patocka
2022-02-17 13:18       ` Nitesh Shetty