From: Mittal, Rishabh <rimittal at ebay.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] NBD with SPDK
Date: Wed, 14 Aug 2019 17:55:31 +0000	[thread overview]
Message-ID: <5CC2540F-3F90-4C2B-9C83-C2DD3D6C5F2F@ebay.com> (raw)
In-Reply-To: 0A4299FB-9034-4530-A633-D4DB7EC1269A@intel.com


That’s right.  I am thinking of using the kernel function remap_page_range only for the buffers that are currently in use. I don't think there will be much cost in mapping the physical addresses to virtual addresses.
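
Roughly the kind of call I mean - an untested sketch using remap_pfn_range (the newer
name for remap_page_range). The mmap() hook and the buf_phys / buf_size names are just
placeholders to show the shape of the mapping, not a real driver:

    #include <linux/fs.h>
    #include <linux/mm.h>

    /* Map the physical pages of one in-use buffer into the user virtual
     * address range that SPDK reserved.  Error handling omitted. */
    static int nbd_buf_mmap(struct file *filp, struct vm_area_struct *vma)
    {
            unsigned long pfn = buf_phys >> PAGE_SHIFT;      /* placeholder physical address */
            unsigned long len = vma->vm_end - vma->vm_start;

            if (len > buf_size)                              /* placeholder buffer length */
                    return -EINVAL;

            return remap_pfn_range(vma, vma->vm_start, pfn, len, vma->vm_page_prot);
    }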

Xiaoxi,

What data size are you using in your testing?


Thanks
Rishabh Mittal

On 8/14/19, 9:54 AM, "Harris, James R" <james.r.harris(a)intel.com> wrote:

    
    
    On 8/14/19, 9:18 AM, "Walker, Benjamin" <benjamin.walker(a)intel.com> wrote:
    
    <trim>
        
        When an I/O is performed by the process initiating the I/O to a file, the data
        goes into the OS page cache buffers at a layer far above the bio stack
        (somewhere up in VFS). If SPDK were to reserve some memory and hand it off to
        your kernel driver, your kernel driver would still need to copy it to that
        location out of the page cache buffers. We can't safely share the page cache
        buffers with a user space process.
       
    I think Rishabh was suggesting that SPDK reserve the virtual address space only.
    Then the kernel could map the page cache buffers into that virtual address space.
    That would not require a data copy, but would require the mapping operations.
    
    I think the profiling data would be really helpful - to quantify how much of the 50us
    is due to copying the 4KB of data.  That can help drive next steps on how to optimize
    the SPDK NBD module.
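    
    If a full profiler run is too heavy, even a crude microbenchmark of the copy gives a
    lower bound - a sketch only, not the actual SPDK NBD code path:
    
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <time.h>
    
        /* Time an average 4KB memcpy to bound how much of the 50us it could explain. */
        int main(void)
        {
                enum { ITERS = 1000000, IO_SIZE = 4096 };
                static char src[IO_SIZE], dst[IO_SIZE];
                struct timespec t0, t1;
    
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int i = 0; i < ITERS; i++)
                        memcpy(dst, src, IO_SIZE);
                clock_gettime(CLOCK_MONOTONIC, &t1);
    
                printf("avg 4KB memcpy: %ld ns\n",
                       ((t1.tv_sec - t0.tv_sec) * 1000000000L +
                        (t1.tv_nsec - t0.tv_nsec)) / ITERS);
                return 0;
        }
    
    A hot-cache loop like this understates the real cost, so profiling the actual I/O path
    is still the more convincing answer.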
    
    Thanks,
    
    -Jim
    
    
        As Paul said, I'm skeptical that the memcpy is significant in the overall
        performance you're measuring. I encourage you to go look at some profiling data
        and confirm that the memcpy is really showing up. I suspect the overhead is
        instead primarily in these spots:
        
        1) Dynamic buffer allocation in the SPDK NBD backend.
        
        As Paul indicated, the NBD target is dynamically allocating memory for each I/O.
        The NBD backend wasn't designed to be fast - it was designed to be simple.
        Pooling would be a lot faster and is something fairly easy to implement.
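        
        As a sketch of what I mean by pooling (the pool name and sizes here are arbitrary,
        not what the real fix would necessarily use):
        
            #include "spdk/env.h"
        
            static struct spdk_mempool *g_nbd_buf_pool;
        
            /* Create the pool once at start-up instead of allocating per I/O.
             * 512 buffers of 128KB each is an arbitrary choice for the sketch. */
            static int nbd_buf_pool_init(void)
            {
                    g_nbd_buf_pool = spdk_mempool_create("nbd_io_bufs", 512, 128 * 1024,
                                                         SPDK_MEMPOOL_DEFAULT_CACHE_SIZE,
                                                         SPDK_ENV_SOCKET_ID_ANY);
                    return g_nbd_buf_pool ? 0 : -1;
            }
        
            /* Per I/O: take a buffer from the pool and return it on completion. */
            static void *nbd_io_get_buf(void)
            {
                    return spdk_mempool_get(g_nbd_buf_pool);
            }
        
            static void nbd_io_put_buf(void *buf)
            {
                    spdk_mempool_put(g_nbd_buf_pool, buf);
            }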
        
        2) The way SPDK does the syscalls when it implements the NBD backend.
        
        Again, the code was designed to be simple, not high performance. It simply calls
        read() and write() on the socket for each command. There are much higher-performance
        ways of doing this; they're just more complex to implement.
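        
        One incremental step, for instance, is vectored I/O so that the reply header and
        the data go out in a single syscall - again just a sketch, not the actual nbd.c:
        
            #include <arpa/inet.h>
            #include <string.h>
            #include <sys/uio.h>
            #include <linux/nbd.h>
        
            /* Send the NBD reply header and the payload with one writev().
             * Short writes and error handling are ignored for brevity. */
            static ssize_t nbd_send_reply(int sock, const char handle[8],
                                          void *payload, size_t len)
            {
                    struct nbd_reply resp;
                    struct iovec iov[2];
        
                    resp.magic = htonl(NBD_REPLY_MAGIC);
                    resp.error = 0;
                    memcpy(resp.handle, handle, sizeof(resp.handle));
        
                    iov[0].iov_base = &resp;
                    iov[0].iov_len  = sizeof(resp);
                    iov[1].iov_base = payload;
                    iov[1].iov_len  = len;
        
                    return writev(sock, iov, 2);   /* one syscall for header + data */
            }
        
        Batching several completed commands into one writev(), or moving the socket I/O
        over to io_uring, would go further but needs more restructuring.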
        
        3) The lack of multi-queue support in NBD
        
        Every I/O is funneled through a single sockpair up to user space. That means
        there is locking going on. I believe this is just a limitation of NBD today - it
        doesn't plug into the block-mq stuff in the kernel and expose multiple
        sockpairs. But someone more knowledgeable on the kernel stack would need to take
        a look.
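        
        Just to illustrate what "multiple sockpairs" would mean on the user-space side,
        the setup might look roughly like this if the driver accepted one NBD_SET_SOCK
        per queue - purely hypothetical and untested; make_sock_to_spdk_target() is a
        made-up helper:
        
            #include <fcntl.h>
            #include <sys/ioctl.h>
            #include <linux/nbd.h>
        
            /* Register one socket per queue with the nbd device, then start it.
             * No error handling; illustrative only. */
            static int nbd_setup_queues(int num_queues)
            {
                    int nbd_fd = open("/dev/nbd0", O_RDWR);
        
                    for (int i = 0; i < num_queues; i++) {
                            int sock = make_sock_to_spdk_target();  /* made-up helper */
                            ioctl(nbd_fd, NBD_SET_SOCK, sock);      /* one socket per queue */
                    }
        
                    return ioctl(nbd_fd, NBD_DO_IT);                /* blocks while device runs */
            }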
        
        Thanks,
        Ben
        
        > 
        > Couple of things that I am not really sure in this flow is :- 1. How memory
        > registration is going to work with RDMA driver.
        > 2. What changes are required in spdk memory management
        > 
        > Thanks
        > Rishabh Mittal
        
    
    



Thread overview: 32+ messages
2019-08-14 17:55 Mittal, Rishabh [this message]
2019-09-23  1:03 [SPDK] NBD with SPDK Huang Zhiteng
2019-09-06 20:31 Kadayam, Hari
2019-09-06 17:13 Mittal, Rishabh
2019-09-06  2:14 Szmyd, Brian
2019-09-06  2:08 Huang Zhiteng
2019-09-05 22:00 Szmyd, Brian
2019-09-05 21:22 Walker, Benjamin
2019-09-05 20:11 Luse, Paul E
2019-09-04 23:27 Luse, Paul E
2019-09-04 23:03 Luse, Paul E
2019-09-04 18:08 Walker, Benjamin
2019-08-30 22:28 Mittal, Rishabh
2019-08-30 17:06 Walker, Benjamin
2019-08-30  1:05 Mittal, Rishabh
2019-08-19 14:41 Luse, Paul E
2019-08-16  1:50 Mittal, Rishabh
2019-08-16  1:26 Harris, James R
2019-08-15 23:34 Mittal, Rishabh
2019-08-14 17:05 Kadayam, Hari
2019-08-14 16:54 Harris, James R
2019-08-14 16:18 Walker, Benjamin
2019-08-14 14:28 Luse, Paul E
2019-08-13 22:08 Mittal, Rishabh
2019-08-13 21:45 Harris, James R
2019-08-13 19:55 Mittal, Rishabh
2019-08-12 18:41 Harris, James R
2019-08-12 18:11 Harris, James R
2019-08-11 23:33 Mittal, Rishabh
2019-08-11 22:51 Mittal, Rishabh
2019-08-11 19:53 Luse, Paul E
2019-08-11  1:08 Mittal, Rishabh
