From: Harris, James R <james.r.harris at intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] Problem with Blobstore when write 65MB continuously
Date: Wed, 10 Jan 2018 16:21:17 +0000	[thread overview]
Message-ID: <6CE0D7EB-15CC-44A2-888B-C283A1C54050@intel.com> (raw)
In-Reply-To: 82C9F782B054C94B9FC04A331649C77A9D4926BB@fmsmsx104.amr.corp.intel.com


Hi Paul and Zhengyu,

The problem is that the app is not giving the block device a chance to complete any I/O while submitting the 520 back-to-back requests.  Blobstore is passive here – it does not do any polling on the block device – that is up to the application.

Technically, with a malloc backend, there is really no polling required since it’s just a memcpy – but the bdev layer defers immediate completions as an event so that bdev API users are ensured they will never get their completion callback invoked in the context of the bdev IO submission.  So in this test case, malloc ends up behaving similarly to an asynchronous block device backend like NVMe.

For NVMe, just giving the app time to poll will not guarantee that completions will occur fast enough to allow more submissions.  The CPU will always be able to submit I/O at a faster rate than the NVMe device can complete them (even for very small I/O).

Increasing the number of channel reqs would work – but at some point these will still run out.  So it really depends on your application – either increase the channel reqs to the absolute maximum you will ever need, or add ENOMEM handling.

Note that using more channels will only work if those channels are each allocated on a separate thread.  Multiple requests to allocate a Blobstore channel on the same thread will always return the same channel.

Regards, 

-Jim

On 1/10/18, 9:03 AM, "SPDK on behalf of Luse, Paul E" <spdk-bounces(a)lists.01.org on behalf of paul.e.luse(a)intel.com> wrote:

    Hi Zhang,
    
    I'm not suggesting changing anything right now, thanks for the point on the SSD though - I'd wait for Ben or someone else to jump in with a bit more info on why this is happening.
    
    Thanks!!
    Paul
    
    -----Original Message-----
    From: Zhengyu Zhang [mailto:freeman.zhang1992(a)gmail.com] 
    Sent: Wednesday, January 10, 2018 9:00 AM
    To: Luse, Paul E <paul.e.luse(a)intel.com>
    Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
    Subject: Re: [SPDK] Problem with Blobstore when write 65MB continuously
    
    Hi Paul
    
    Thanks for your reply!
    
    
    On 1/10/18 11:18 PM, Luse, Paul E wrote:
    > So what’s happening here, internally within Blobstore, is that when
    > _spdk_blob_request_submit_op() tries to get a channel->req via
    > spdk_bs_batch_open() for a write (near the end) it doesn’t have any
    > available, so it returns NULL, which results in a callback error of
    > -ENOMEM to the hello_blob callback.  The default number of channel
    > reqs is 512; the hello_blob app doesn’t change that and uses a single
    > channel for submitting the 520 back-to-back write requests that you
    > are issuing, so this failure happens right there towards the end.
    > 
    
    So you are suggesting that I tune the number of channel reqs, or use multiple channels, if I want to write more?
    
    > 
    > Ben, if I crank up the max channel reqs this works OK.  I’m thinking 
    > this shouldn’t be needed and wondering why we aren’t placing completed 
    > channel reqs back on the list fast enough (if that’s the real problem).
    > Something to do with this being a malloc backend maybe?  Would try 
    > w/nvme but my dev system isn’t quite ready for prime time yet after 
    > reblasting it….
    > 
    >
    
    I tested with both the malloc backend and a P3700 NVMe SSD. The results are the same.
    
    Thanks!
    Zhengyu
    _______________________________________________
    SPDK mailing list
    SPDK(a)lists.01.org
    https://lists.01.org/mailman/listinfo/spdk
    


Thread overview: 16+ messages
2018-01-10 16:21 Harris, James R [this message]
  -- strict thread matches above, loose matches on Subject: below --
2018-01-11  4:08 [SPDK] Problem with Blobstore when write 65MB continuously Zhengyu Zhang
2018-01-10 20:53 Walker, Benjamin
2018-01-10 19:28 Andrey Kuzmin
2018-01-10 17:17 Walker, Benjamin
2018-01-10 17:11 Walker, Benjamin
2018-01-10 17:02 Luse, Paul E
2018-01-10 17:00 Andrey Kuzmin
2018-01-10 16:58 Walker, Benjamin
2018-01-10 16:47 Luse, Paul E
2018-01-10 16:32 Walker, Benjamin
2018-01-10 16:03 Luse, Paul E
2018-01-10 15:59 Zhengyu Zhang
2018-01-10 15:18 Luse, Paul E
2018-01-10 14:03 Luse, Paul E
2018-01-10  3:15 Zhengyu Zhang
