From: Selva Jove <selvajove@gmail.com>
To: ksummit@lists.linux.dev
Cc: joshiiitr@gmail.com, nitheshshetty@gmail.com
Subject: [TECH TOPIC] Settling Copy Offload via NVMe SCC
Date: Fri, 25 Jun 2021 20:17:29 +0530
Message-ID: <CAHqX9vZ_F4p0J_E3DZ4eoW0d3J2dZET5GEbM4Gr5wkUdRRPsAQ@mail.gmail.com>

The de-facto way of copying data in the I/O stack has been to pull it
from one location and then push it to another. The farther the
application that needs the copy sits from the storage, the longer that
round trip takes. With copy offload the trips get shorter, as the
storage device presents an interface to do the data copying internally.
This lets the host avoid the pull-and-push method altogether, freeing
up the host CPU, RAM and the fabric elements.
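
As a rough user-space illustration (not from the patch series; the fd,
offsets and length below are made up), the pull-and-push method looks
like this, with every byte taking a round trip through host memory:

/* Pull-and-push copy: data moves device -> host RAM -> device,
 * consuming host CPU cycles and memory bandwidth on the way.
 * Illustrative sketch only; error handling kept minimal. */
#include <stdlib.h>
#include <unistd.h>

static int copy_range(int fd, off_t src, off_t dst, size_t len)
{
	char *buf = malloc(len);
	ssize_t n = -1;

	if (!buf)
		return -1;
	n = pread(fd, buf, len, src);          /* pull from source offset */
	if (n == (ssize_t)len)
		n = pwrite(fd, buf, len, dst); /* push to destination offset */
	free(buf);
	return n == (ssize_t)len ? 0 : -1;
}

With SCC the same data movement becomes a single command to the device,
and the bounce buffer above disappears from the host side.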

A copy-offload interface has existed in SCSI storage for at least a
decade through XCOPY, but it faced insurmountable challenges in getting
into the Linux I/O stack. For NVMe storage, copy offload recently made
its way into the main specification as the new Simple Copy Command
(SCC). This has stimulated renewed interest and effort towards copy
offload in the Linux community.

In this talk, we present the upstream work we have been doing around SCC -
https://lore.kernel.org/linux-nvme/20210219124517.79359-1-selvakuma.s1@samsung.com/#r

We'd cover the design decisions in detail and seek feedback on the
plumbing aspects such as -

1. The user interface. Should it be a new ioctl/syscall, an
io_uring-based opcode, or must it fit into an existing syscall such as
copy_file_range (see the sketch after this list)?
2. The transport between the block layer and NVMe: a chain of
payload-less bios (as with discard) vs. a bio carrying a payload.
3. Must SCSI XCOPY compatibility be considered while we build
interfaces around NVMe SCC?
4. Feasibility and challenges for in-kernel users, including file
systems and device mappers.
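
To make point 1 concrete, here is a minimal sketch of expressing a copy
through the existing copy_file_range(2) syscall. The paths and length
are placeholders, and whether the request ever becomes a device-side
copy depends on the file-system and block-layer plumbing discussed
above; today it typically falls back to an in-kernel read/write loop.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int in = open("/mnt/nvme/src.bin", O_RDONLY);
	int out = open("/mnt/nvme/dst.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	ssize_t copied;

	if (in < 0 || out < 0)
		return 1;

	/* NULL offsets: copy starting at the current file offsets (0). */
	copied = copy_file_range(in, NULL, out, NULL, 1 << 20, 0);
	if (copied < 0)
		perror("copy_file_range");
	else
		printf("copied %zd bytes\n", copied);

	close(in);
	close(out);
	return copied < 0;
}

The open question is whether such an existing interface can express
device-offloaded copies cleanly, or whether a dedicated ioctl/syscall
or io_uring opcode is a better fit.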

Thanks,
--------------
Selva & Nitesh
