So what's happening here: internally, within Blobstore, when _spdk_blob_request_submit_op() tries to get a channel->req via spdk_bs_batch_open() for a write (near the end of the run), none are available, so it returns NULL, which results in a -ENOMEM error being passed to the hello_blob callback. The default number of channel reqs is 512; the hello_blob app doesn't change that and uses a single channel for submitting the 520 back-to-back write requests you are issuing, so the failure happens right there towards the end.

Ben, if I crank up the max channel reqs this works OK. I'm thinking this shouldn't be needed, and I'm wondering why we aren't placing completed channel reqs back on the list fast enough (if that's the real problem). Something to do with this being a malloc backend, maybe? I would try with NVMe, but my dev system isn't quite ready for prime time yet after reblasting it...

Thx
Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Wednesday, January 10, 2018 7:04 AM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Problem with Blobstore when writing 65MB continuously

Hi Zhang,

Not sure off the top of my head, but I'm happy to take a quick look; will let you know what I see on this end...

-Paul

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Zhengyu Zhang
Sent: Tuesday, January 9, 2018 8:16 PM
To: Storage Performance Development Kit
Subject: [SPDK] Problem with Blobstore when writing 65MB continuously

Hi list!

I want to write an app with Blobstore in SPDK, and I have been playing with examples/blob/hello_world/hello_blob.c for a while. I modified hello_blob to write more pages than the original single page:

    for (i = 0; i < SOMEVAL; i++) {
        spdk_bs_io_write_blob(hello_context->blob, hello_context->channel,
                              hello_context->write_buff, offset, 32,
                              write_complete, hello_context);
        offset += 32;
    }

That is, the loop issues SOMEVAL writes of 32 pages each.
When the total amount of data written is below 64MB (SOMEVAL <= 512), it works fine. However, when the total size goes over 64MB, e.g. 65MB, it breaks:

    hello_blob.c: 388:blob_create_complete: *NOTICE*: new blob id 4294967296
    hello_blob.c: 327:open_complete: *NOTICE*: entry
    hello_blob.c: 338:open_complete: *NOTICE*: blobstore has FREE clusters of 380063
    hello_blob.c: 358:open_complete: *NOTICE*: resized blob now has USED clusters of 65
    hello_blob.c: 295:sync_complete: *NOTICE*: entry
    hello_blob.c: 253:blob_write: *NOTICE*: entry
    hello_blob.c: 232:write_complete: *NOTICE*: entry
    hello_blob.c: 115:unload_bs: *ERROR*: Error in write completion (err -12)
    blobstore.c:2563:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
    hello_blob.c:  99:unload_complete: *NOTICE*: entry
    hello_blob.c: 101:unload_complete: *ERROR*: Error -16 unloading the bobstore

I have no idea what is going on... can anyone help?

Thanks
Zhengyu