linux-mm.kvack.org archive mirror
From: Mike Rapoport <rppt@linux.vnet.ibm.com>
To: Nathan Hjelm <hjelmn@me.com>
Cc: Open MPI Developers <devel@lists.open-mpi.org>,
	Andrei Vagin <avagin@openvz.org>, Arnd Bergmann <arnd@arndb.de>,
	Jann Horn <jannh@google.com>,
	rr-dev@mozilla.org, linux-api@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Josh Triplett <josh@joshtriplett.org>,
	criu@openvz.org, linux-mm@kvack.org, gdb@sourceware.org,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Greg KH <gregkh@linuxfoundation.org>,
	linux-fsdevel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Michael Kerrisk <mtk.manpages@gmail.com>
Subject: Re: [OMPI devel] [PATCH v5 0/4] vm: add a syscall to map a process memory into a pipe
Date: Tue, 27 Feb 2018 09:10:20 +0200	[thread overview]
Message-ID: <20180227071020.GA24633@rapoport-lnx> (raw)
In-Reply-To: <B9A6330F-48FE-4260-A505-3FF043874F0F@me.com>

On Mon, Feb 26, 2018 at 09:38:19AM -0700, Nathan Hjelm wrote:
> All MPI implementations have support for using CMA to transfer data
> between local processes. The performance is fairly good (not as good as
> XPMEM) but the interface limits what we can do with remote process
> memory (no atomics). I have not heard about this new proposal. What is
> the benefit of the proposed calls over the existing calls?

The proposed system call combines the functionality of process_vm_readv()
and vmsplice() [1]; it is particularly useful when one needs to read
remote process memory and then write it to a file descriptor. In that
case a sequence of process_vm_readv() + write() calls, which involves two
copies of the data, can be replaced with process_vmsplice() + splice(),
which involves no copy at all.

[1] https://lkml.org/lkml/2018/1/9/32
 
> -Nathan
> 
> > On Feb 26, 2018, at 2:02 AM, Pavel Emelyanov <xemul@virtuozzo.com> wrote:
> > 
> > On 02/21/2018 03:44 AM, Andrew Morton wrote:
> >> On Tue,  9 Jan 2018 08:30:49 +0200 Mike Rapoport <rppt@linux.vnet.ibm.com> wrote:
> >> 
> >>> This patch introduces a new process_vmsplice system call that
> >>> combines the functionality of process_vm_readv and vmsplice.
> >> 
> >> All seems fairly straightforward.  The big question is: do we know that
> >> people will actually use this, and get sufficient value from it to
> >> justify its addition?
> > 
> > Yes, that's what bothers us a lot too :) I've tried to start by finding out whether
> > anyone uses the process_vm_readv/writev() calls, but failed :( Does anybody know how
> > popular these syscalls are? If their users operate on large amounts of memory, they
> > could benefit from the proposed splice extension.
> > 
> > -- Pavel

-- 
Sincerely yours,
Mike.



Thread overview: 15+ messages
2018-01-09  6:30 [PATCH v5 0/4] vm: add a syscall to map a process memory into a pipe Mike Rapoport
2018-01-09  6:30 ` [PATCH v5 1/4] fs/splice: introduce pages_to_pipe helper Mike Rapoport
2018-01-09  6:30 ` [PATCH v5 2/4] vm: add a syscall to map a process memory into a pipe Mike Rapoport
2018-01-09  6:30 ` [PATCH v5 3/4] x86: wire up the process_vmsplice syscall Mike Rapoport
2018-01-11 17:10   ` kbuild test robot
2018-01-09  6:30 ` [PATCH v5 4/4] test: add a test for " Mike Rapoport
2018-02-21  0:44 ` [PATCH v5 0/4] vm: add a syscall to map a process memory into a pipe Andrew Morton
2018-02-26  9:02   ` Pavel Emelyanov
2018-02-26 16:38     ` [OMPI devel] " Nathan Hjelm
2018-02-27  7:10       ` Mike Rapoport [this message]
2018-02-27  2:18     ` Dmitry V. Levin
2018-02-28  6:11       ` Andrei Vagin
2018-02-28  7:12       ` Pavel Emelyanov
2018-02-28 17:50         ` Andrei Vagin
2018-02-28 23:12         ` [OMPI devel] " Atchley, Scott
