* [SPDK] performance of rocksdb on blobfs
@ 2017-05-04  5:04 Liu, Yun1
  0 siblings, 0 replies; 3+ messages in thread
From: Liu, Yun1 @ 2017-05-04  5:04 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 505 bytes --]


I've tried db_bench with RocksDB on blobfs, but didn't get the performance improvement I expected.
Compared with NO_SPDK, insert increased by 70% and readwrite by 28%, but overwrite decreased by 38% and randread by 48%.
(The attached picture shows the detailed results.)
Is this a normal result, or is there a way to get a better result by adjusting some parameters?

PS: here is the environment:
Storage: P3700, OS: CentOS 7.3, gcc: 6.3, Keys: 16B, Values: 1000B, Entries: 500M
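
For reference, a db_bench invocation matching these parameters might look like the sketch below; the benchmark names are standard db_bench ones that only roughly correspond to the labels above, and the flags actually passed by run_test.sh may differ:

  # Sketch only: workload values taken from the environment above;
  # run_test.sh may pass different or additional flags.
  ./db_bench --key_size=16 --value_size=1000 --num=500000000 \
             --benchmarks=fillseq,overwrite,readrandom,readwhilewriting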

Thanks,
Yun


[-- Attachment #2: rocksdb_db_bench.png --]
[-- Type: image/png, Size: 36035 bytes --]


* Re: [SPDK] performance of rocksdb on blobfs
@ 2017-05-05  7:31 Andrey Kuzmin
  0 siblings, 0 replies; 3+ messages in thread
From: Andrey Kuzmin @ 2017-05-05  7:31 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 2261 bytes --]

On a related note, are there any reference RocksDB SPDK/NO_SPDK comparison
results available?

Thanks,
Andrey

On May 4, 2017 20:06, "Harris, James R" <james.r.harris(a)intel.com> wrote:

>
> > On May 3, 2017, at 10:04 PM, Liu, Yun1 <yun1.liu(a)intel.com> wrote:
> >
> >
> > I've tried db_bench with RocksDB on blobfs, but didn't get the performance improvement I expected.
> > Compared with NO_SPDK, insert increased by 70% and readwrite by 28%, but overwrite decreased by 38% and randread by 48%.
> > (The attached picture shows the detailed results.)
> > Is this a normal result, or is there a way to get a better result by adjusting some parameters?
> >
> > PS: here is the environment:
> > Storage: P3700, OS: CentOS 7.3, gcc: 6.3, Keys: 16B, Values: 1000B, Entries: 500M
>
> Hi Yun,
>
> This is not the expected result.
>
> Can you confirm a few pieces of data for me:
>
> 1) The FW revision on your P3700. The recommendation is FW version 1H0 or greater (look at the last three digits of the FW version). TRIM performance in particular is much better with more recent P3700 FW, and it helps blobfs performance more than kernel/XFS performance (even with mount -o discard). This would not explain the randread difference (randread drives no compaction, and therefore no file deletions), but it could affect overwrite performance. See the first example below for a quick way to check the FW revision.
> 2) What size of SPDK blobfs cache are you using for your tests? Note that the kernel (NO_SPDK) case will use as much memory as possible for page cache, while SPDK blobfs is restricted to the amount specified. The run_test.sh script will override the default 4GB blobfs cache if you specify a larger value in the CACHE_SIZE environment variable, which is given in MB. So CACHE_SIZE=16384 ./run_test.sh will use 16GB of cache for blobfs; see the second example below. Again, this would likely affect performance most in the overwrite case, where compaction drives blobfs cache usage with flush and compaction operations.
> 3) Can you describe the CPU/platform you are running on?
>
> Thanks,
>
> -Jim

[-- Attachment #2: attachment.html --]
[-- Type: text/html, Size: 2810 bytes --]


* Re: [SPDK] performance of rocksdb on blobfs
@ 2017-05-04 17:06 Harris, James R
  0 siblings, 0 replies; 3+ messages in thread
From: Harris, James R @ 2017-05-04 17:06 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1842 bytes --]


> On May 3, 2017, at 10:04 PM, Liu, Yun1 <yun1.liu(a)intel.com> wrote:
> 
> 
> I've tried db_bench with RocksDB on blobfs, but didn't get the performance improvement I expected.
> Compared with NO_SPDK, insert increased by 70% and readwrite by 28%, but overwrite decreased by 38% and randread by 48%.
> (The attached picture shows the detailed results.)
> Is this a normal result, or is there a way to get a better result by adjusting some parameters?
> 
> PS: here is the environment:
> Storage: P3700, OS: CentOS 7.3, gcc: 6.3, Keys: 16B, Values: 1000B, Entries: 500M

Hi Yun,

This is not the expected result.

Can you confirm a few pieces of data for me:

1) The FW revision on your P3700. The recommendation is FW version 1H0 or greater (look at the last three digits of the FW version). TRIM performance in particular is much better with more recent P3700 FW, and it helps blobfs performance more than kernel/XFS performance (even with mount -o discard). This would not explain the randread difference (randread drives no compaction, and therefore no file deletions), but it could affect overwrite performance. See the first example below for a quick way to check the FW revision.
2) What size of SPDK blobfs cache are you using for your tests? Note that the kernel (NO_SPDK) case will use as much memory as possible for page cache, while SPDK blobfs is restricted to the amount specified. The run_test.sh script will override the default 4GB blobfs cache if you specify a larger value in the CACHE_SIZE environment variable, which is given in MB. So CACHE_SIZE=16384 ./run_test.sh will use 16GB of cache for blobfs; see the second example below. Again, this would likely affect performance most in the overwrite case, where compaction drives blobfs cache usage with flush and compaction operations.
3) Can you describe the CPU/platform you are running on?
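
For the FW check in item 1, either of the following should work; nvme0 is an assumed controller name, so substitute your device, and the second variant assumes nvme-cli is installed:

  # Firmware revision as reported by the kernel (nvme0 is assumed):
  cat /sys/class/nvme/nvme0/firmware_rev
  # Or query the controller directly with nvme-cli:
  nvme id-ctrl /dev/nvme0 | grep '^fr '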
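
For item 2, the two invocations look like this; CACHE_SIZE is in MB, and the working directory is assumed to be wherever run_test.sh lives in your tree:

  # Default: blobfs gets a 4GB cache when CACHE_SIZE is unset.
  ./run_test.sh
  # Override: 16384 MB = 16GB of blobfs cache.
  CACHE_SIZE=16384 ./run_test.sh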

Thanks,

-Jim



Thread overview: 3+ messages
2017-05-04  5:04 [SPDK] performance of rocksdb on blobfs Liu, Yun1
2017-05-04 17:06 Harris, James R
2017-05-05  7:31 Andrey Kuzmin
