linux-scsi.vger.kernel.org archive mirror
From: Bart Van Assche <bvanassche@acm.org>
To: "Martin K. Petersen" <martin.petersen@oracle.com>,
	Christoph Hellwig <hch@infradead.org>
Cc: Daejun Park <daejun7.park@samsung.com>,
	"avri.altman@wdc.com" <avri.altman@wdc.com>,
	"jejb@linux.ibm.com" <jejb@linux.ibm.com>,
	"asutoshd@codeaurora.org" <asutoshd@codeaurora.org>,
	"beanhuo@micron.com" <beanhuo@micron.com>,
	"stanley.chu@mediatek.com" <stanley.chu@mediatek.com>,
	"cang@codeaurora.org" <cang@codeaurora.org>,
	"tomas.winkler@intel.com" <tomas.winkler@intel.com>,
	ALIM AKHTAR <alim.akhtar@samsung.com>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Sang-yoon Oh <sangyoon.oh@samsung.com>,
	Sung-Jun Park <sungjun07.park@samsung.com>,
	yongmyung lee <ymhungry.lee@samsung.com>,
	Jinyoung CHOI <j-young.choi@samsung.com>,
	Adel Choi <adel.choi@samsung.com>,
	BoRam Shin <boram.shin@samsung.com>
Subject: Re: [PATCH v6 0/5] scsi: ufs: Add Host Performance Booster Support
Date: Wed, 22 Jul 2020 07:34:18 -0700	[thread overview]
Message-ID: <182631f8-5821-ae50-142b-fbe224d5066a@acm.org> (raw)
In-Reply-To: <yq1blk7g1jd.fsf@ca-mkp.ca.oracle.com>

On 2020-07-22 06:27, Martin K. Petersen wrote:
> Christoph Hellwig wrote:
>> As this monster seems to come back again and again let me re-iterate
>> my statement:
>>
>> I do not think Linux should support a broken standards extension that
>> creates a huge shared state between the Linux initiator and the target
>> device like this, with all its associated problems.
> 
> I spent a couple of hours looking at this series again last night. And
> while the code has improved, I do remain concerned about the general
> concept.
> 
> I understand how caching the FTL in host memory can improve performance
> from a theoretical perspective. However, I am not sure how much of a
> difference this is going to make outside of synthetic benchmarks. What
> are the workloads that keep reading the same blocks from media? Or does
> the performance improvement exclusively come from the second order
> pre-fetching effect for larger I/Os? If so, why is the device's internal
> L2P SRAM cache ineffective at handling that case?

Hi Martin,

These are great questions. The size of the L2P table is proportional to
the device capacity, and device capacities keep increasing. My
understanding is that on-device SRAM is much more expensive than (host)
DRAM. Caching the L2P table in host memory makes it possible to keep the
(UFS) device cost low. The Samsung HPB paper explains this as follows:
"Mobile storage devices typically have RAM with constrained size, thus
lack in memory to keep the whole mapping table."

This is not an entirely new approach. The L2P table of the Fusion-io
PCIe SSD adapters that were introduced more than ten years ago was kept
entirely in host DRAM. The manual for that device documented how much
memory the Fusion-io driver needed for the L2P table.

This issue is not unique to UFS devices. My understanding is that DRAM
cost is a significant part of the cost of enterprise and consumer SSD
devices. SSD manufacturers are also interested in solutions to reduce
the amount of DRAM inside SSDs. One possible solution, paging the L2P
table, has a significant disadvantage: it doubles the number of media
accesses for random I/O with small transfer sizes.

The performance benefit of HPB comes from significantly reducing the
number of media accesses in case of random I/O.
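To make that concrete, here is a toy model (a hypothetical FTL cache
sketch, not actual device firmware): when the L2P map is paged and the
needed map page misses the device's small SRAM cache, a random read
costs two media accesses instead of one, while a host-cached (HPB-style)
map always costs exactly one:

```python
import random

def media_accesses(num_reads, map_pages, sram_slots, host_cached=False):
    """Count media accesses for random small reads under a toy L2P model.

    host_cached=True models HPB: the host supplies the physical address,
    so the device never has to fetch a map page from media.
    """
    sram = set()   # toy device-side cache of map pages, capped in size
    accesses = 0
    for _ in range(num_reads):
        page = random.randrange(map_pages)
        if not host_cached and page not in sram:
            accesses += 1          # extra media read to fetch the map page
            if len(sram) >= sram_slots:
                sram.pop()         # evict an arbitrary cached map page
            sram.add(page)
        accesses += 1              # the data read itself
    return accesses

random.seed(0)
n = 10_000
paged = media_accesses(n, map_pages=4096, sram_slots=64)
hpb = media_accesses(n, map_pages=4096, sram_slots=64, host_cached=True)
print(f"paged L2P: {paged} accesses, host-cached L2P: {hpb} accesses")
```

With a map far larger than the SRAM cache, almost every random read
misses, so the paged case approaches 2x the media accesses of the
host-cached case; sequential or highly local workloads would see far
less benefit, which matches Martin's question about workload dependence.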

I am not claiming that HPB is a perfect solution, but I wouldn't be
surprised if enterprise SSD vendors started looking into a similar
solution sooner or later.

Bart.

Thread overview: 32+ messages
     [not found] <CGME20200713103423epcms2p8442ee7cc22395e4a4cedf224f95c45e8@epcms2p8>
2020-07-13 10:34 ` [PATCH v6 0/5] scsi: ufs: Add Host Performance Booster Support Daejun Park
     [not found]   ` <CGME20200713103423epcms2p8442ee7cc22395e4a4cedf224f95c45e8@epcms2p4>
2020-07-13 10:40     ` [PATCH v6 2/5] scsi: ufs: Add UFS-feature layer Daejun Park
2020-07-22  6:41       ` Christoph Hellwig
2020-07-22 12:46         ` Martin K. Petersen
2020-07-22 15:06           ` Bart Van Assche
2020-07-13 10:53     ` [PATCH v6 4/5] scsi: ufs: L2P map management for HPB read Daejun Park
2020-07-15 18:34   ` [PATCH v6 0/5] scsi: ufs: Add Host Performance Booster Support Avi Shchislowski
2020-07-16  1:41     ` Bart Van Assche
2020-07-16 10:00       ` Avi Shchislowski
2020-07-16 16:21         ` Eric Biggers
2020-07-16 16:45         ` Alim Akhtar
2020-07-17 15:54           ` Avi Shchislowski
2020-08-10 15:38         ` Greg KH
2020-07-16  8:13     ` Bean Huo
2020-07-16  8:14     ` Bean Huo
2020-07-16  1:05   ` Alim Akhtar
2020-07-17  5:24     ` Avri Altman
2020-07-19  6:35       ` Avri Altman
2020-07-21 18:15         ` Alim Akhtar
2020-07-22  4:20         ` Martin K. Petersen
2020-07-22  6:18           ` Avri Altman
2020-07-22  6:39   ` Christoph Hellwig
2020-07-22 13:27     ` Martin K. Petersen
2020-07-22 14:34       ` Bart Van Assche [this message]
2020-07-27  9:33     ` Pavel Machek
     [not found]   ` <CGME20200713103423epcms2p8442ee7cc22395e4a4cedf224f95c45e8@epcms2p3>
2020-07-13 10:38     ` [PATCH v6 1/5] scsi: ufs: Add UFS feature related parameter Daejun Park
2020-07-13 12:13       ` Can Guo
2020-07-13 10:58     ` [PATCH v6 5/5] scsi: ufs: Prepare HPB read for cached sub-region Daejun Park
2020-07-27  6:18     ` [PATCH v6 2/5] scsi: ufs: Add UFS-feature layer Daejun Park
2020-08-04 18:43       ` Bart Van Assche
     [not found]   ` <CGME20200713103423epcms2p8442ee7cc22395e4a4cedf224f95c45e8@epcms2p1>
2020-07-13 10:50     ` [PATCH v6 3/5] scsi: ufs: Introduce HPB module Daejun Park
2020-08-04 23:33     ` Re: [PATCH v6 2/5] scsi: ufs: Add UFS-feature layer Daejun Park
