From: Luse, Paul E <paul.e.luse at intel.com>
To: spdk@lists.01.org
Subject: Re: [SPDK] Problem with Blobstore when writing 65MB continuously
Date: Wed, 10 Jan 2018 16:03:44 +0000	[thread overview]
Message-ID: <82C9F782B054C94B9FC04A331649C77A9D4926BB@fmsmsx104.amr.corp.intel.com> (raw)
In-Reply-To: d4bd6ead-9c64-a1fe-6ca2-491c96ec823f@gmail.com


Hi Zhang,

I'm not suggesting changing anything right now; thanks for the data point on the SSD, though. I'd wait for Ben or someone else to jump in with a bit more info on why this is happening.

Thanks!!
Paul

-----Original Message-----
From: Zhengyu Zhang [mailto:freeman.zhang1992(a)gmail.com] 
Sent: Wednesday, January 10, 2018 9:00 AM
To: Luse, Paul E <paul.e.luse(a)intel.com>
Cc: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Problem with Blobstore when writing 65MB continuously

Hi Paul

Thanks for your reply!


On 1/10/18 11:18 PM, Luse, Paul E wrote:
> So what's happening here is that, internally within Blobstore, when
> _spdk_blob_request_submit_op() tries to get a channel->req via
> spdk_bs_batch_open() for a write (near the end), none are available,
> so it returns NULL, which results in a callback error of -ENOMEM to
> the hello_blob callback.  The default number of channel reqs is 512,
> the hello_blob app doesn't change that, and it uses a single channel
> for submitting the 520 back-to-back write requests that you are
> issuing, so the failure happens right there towards the end.
> 

So are you suggesting that I tune the number of channel reqs, or use multiple channels, if I want to write more?
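
For concreteness, here is a minimal sketch of the tuning option as I understand it, assuming the max_channel_ops field of struct spdk_bs_opts in spdk/blob.h is the knob you mean (the initialization flow and names below are just for illustration):

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* Continue with spdk_bs_alloc_io_channel(), blob creation, etc. */
}

static void
init_bs_with_more_channel_reqs(struct spdk_bs_dev *bs_dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts);
	/* Default is 512 channel reqs; raise it so 520 back-to-back writes
	 * on a single channel do not exhaust the pool and fail with -ENOMEM. */
	opts.max_channel_ops = 1024;

	spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
}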

> 
> Ben, if I crank up the max channel reqs, this works OK.  I'm thinking
> this shouldn't be needed, and I'm wondering why we aren't placing
> completed channel reqs back on the list fast enough (if that's the real
> problem).  Something to do with this being a malloc backend, maybe?
> I'd try with NVMe, but my dev system isn't quite ready for prime time
> yet after reblasting it.
> 
>

I tested on both a malloc backend and a P3700 NVMe SSD; the results are the same.
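
If raising max_channel_ops is not the right fix, the other option I took from your mail is to cap the writes in flight so a single channel never needs more than its 512 reqs at once, issuing the next write from the completion callback. A rough sketch of what I have in mind (the struct, MAX_INFLIGHT, and helper names are made up for illustration; the write call assumes the spdk_blob_io_write() name from newer headers, spdk_bs_io_write_blob in older releases, and a DMA-safe buffer from spdk_dma_malloc()):

#define MAX_INFLIGHT 256

struct seq_writer {
	struct spdk_blob *blob;
	struct spdk_io_channel *channel;
	uint8_t *buf;            /* allocated with spdk_dma_malloc() */
	uint64_t pages_per_io;   /* e.g. 32 pages (128 KiB) per request */
	uint64_t next_io;        /* next request index to submit */
	uint64_t total_ios;      /* 520 in my test */
	uint64_t inflight;
};

static void seq_write_done(void *cb_arg, int bserrno);

static void
seq_submit_more(struct seq_writer *w)
{
	while (w->inflight < MAX_INFLIGHT && w->next_io < w->total_ios) {
		uint64_t page = w->next_io * w->pages_per_io;

		w->inflight++;
		w->next_io++;
		spdk_blob_io_write(w->blob, w->channel,
				   w->buf + page * 0x1000,
				   page, w->pages_per_io,
				   seq_write_done, w);
	}
}

static void
seq_write_done(void *cb_arg, int bserrno)
{
	struct seq_writer *w = cb_arg;

	w->inflight--;
	if (bserrno != 0) {
		/* -ENOMEM here would mean even MAX_INFLIGHT is too high. */
		return;
	}
	/* Refill the pipeline from the completion path. */
	seq_submit_more(w);
}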

Thanks!
Zhengyu
