From: Niu, Yawei
Subject: Re: [SPDK] io size limitation on spdk_blob_io_write()
Date: Thu, 21 Mar 2019 12:08:16 +0000
To: spdk@lists.01.org

Thanks for the reply, Maciek.

Yes, our cluster size is 1GB by default, and we have our own finer-grained block allocator inside the blob (we need a 4k block allocator, and I'm afraid that's not feasible for the blob allocator), so using a small cluster size isn't an option for us.

Would you consider improving the blob I/O interface to split I/O according to backend bdev limitations (I think it's similar to the cross-cluster-boundary split)? Otherwise we have to be aware of the bdev limitations underneath the blobstore, which doesn't look quite clean to me. What do you think?

Thanks
-Niu

On 21/03/2019, 3:41 PM, "SPDK on behalf of Szwed, Maciej" wrote:

    Hi Niu,
    We do split I/O according to backend bdev limitations, but only if you create a bdev and use the spdk_bdev_read/write/... calls. For the blob interface there isn't any mechanism for that, unfortunately.
    I'm guessing that you are using a cluster size of at least 128MB for your blobs. You can try setting the cluster size to a value lower than the NVMe bdev limitation, and the blobstore layer will then always split I/O at cluster-size granularity.

    Regards,
    Maciek

    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Niu, Yawei
    Sent: Thursday, March 21, 2019 2:36 AM
    To: Storage Performance Development Kit
    Subject: [SPDK] io size limitation on spdk_blob_io_write()

    Hi,

    We discovered that spdk_blob_io_write() fails with a large I/O size (128MB) over an NVMe bdev. I checked the SPDK code a bit, and it seems the failure is because the size exceeded the NVMe bdev I/O request size limitation (which depends on the I/O queue depth & max transfer size).

    We may work around the problem by splitting the I/O into several spdk_blob_io_write() calls, but I was wondering whether blobstore should hide these bdev details/limitations from the blobstore caller and split the I/O according to backend bdev limitations (just like what we do for cross-cluster-boundary I/O), so that the blobstore caller doesn't need to care what type of bdev is underneath? Any thoughts?

    Thanks
    -Niu

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk
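
For reference, a minimal sketch of the caller-side workaround discussed in the thread might look like the code below (C, untested): a wrapper that splits one large blob write into chunks and issues the next chunk from the completion callback of the previous one. MAX_CHUNK_IO_UNITS, split_write_ctx and split_blob_io_write() are illustrative assumptions, not SPDK API; only spdk_blob_io_write() and its callback type come from SPDK.

    #include <errno.h>
    #include <stdlib.h>
    #include "spdk/blob.h"
    #include "spdk/util.h"

    /* Assumed safe per-call size, in blob io units; a real implementation
     * would derive this from the backend bdev's limits. */
    #define MAX_CHUNK_IO_UNITS (8 * 1024)

    struct split_write_ctx {
        struct spdk_blob        *blob;
        struct spdk_io_channel  *channel;
        uint8_t                 *payload;       /* cursor into the caller's buffer */
        uint64_t                 io_unit_size;  /* bytes per blob io unit */
        uint64_t                 offset;        /* next blob io unit to write */
        uint64_t                 remaining;     /* io units still to write */
        spdk_blob_op_complete    user_cb;       /* caller's completion callback */
        void                    *user_cb_arg;
    };

    static void split_write_next(struct split_write_ctx *ctx);

    static void
    split_write_done(void *cb_arg, int bserrno)
    {
        struct split_write_ctx *ctx = cb_arg;

        if (bserrno != 0 || ctx->remaining == 0) {
            /* Error, or the whole write has completed: report to the caller. */
            ctx->user_cb(ctx->user_cb_arg, bserrno);
            free(ctx);
            return;
        }
        split_write_next(ctx);
    }

    static void
    split_write_next(struct split_write_ctx *ctx)
    {
        uint64_t chunk = spdk_min(ctx->remaining, MAX_CHUNK_IO_UNITS);
        uint64_t offset = ctx->offset;
        void *buf = ctx->payload;

        /* Advance the cursor past this chunk before submitting it. */
        ctx->payload += chunk * ctx->io_unit_size;
        ctx->offset += chunk;
        ctx->remaining -= chunk;

        spdk_blob_io_write(ctx->blob, ctx->channel, buf, offset, chunk,
                           split_write_done, ctx);
    }

    /* Same idea as spdk_blob_io_write(), plus the io unit size in bytes. */
    static void
    split_blob_io_write(struct spdk_blob *blob, struct spdk_io_channel *channel,
                        void *payload, uint64_t io_unit_size,
                        uint64_t offset, uint64_t length,
                        spdk_blob_op_complete cb_fn, void *cb_arg)
    {
        struct split_write_ctx *ctx = calloc(1, sizeof(*ctx));

        if (ctx == NULL) {
            cb_fn(cb_arg, -ENOMEM);
            return;
        }
        ctx->blob = blob;
        ctx->channel = channel;
        ctx->payload = payload;
        ctx->io_unit_size = io_unit_size;
        ctx->offset = offset;
        ctx->remaining = length;
        ctx->user_cb = cb_fn;
        ctx->user_cb_arg = cb_arg;
        split_write_next(ctx);
    }

The sketch keeps only one chunk in flight to stay short; a real version would query the limits of the bdev underneath the blobstore and could keep several chunks outstanding for better throughput, which is exactly the bookkeeping the thread argues blobstore itself should hide from the caller.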