From: Ulf Hansson <ulf.hansson@linaro.org>
To: Michael Wu <michael@allwinnertech.com>
Cc: "Adrian Hunter" <adrian.hunter@intel.com>,
	"Christian Löhle" <CLoehle@hyperstone.com>,
	"Avri Altman" <avri.altman@wdc.com>,
	"beanhuo@micron.com" <beanhuo@micron.com>,
	"porzio@gmail.com" <porzio@gmail.com>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	allwinner-opensource-support
	<allwinner-opensource-support@allwinnertech.com>
Subject: Re: [PATCH] mmc: block: enable cache-flushing when mmc cache is on
Date: Tue, 29 Mar 2022 11:53:26 +0200
Message-ID: <CAPDyKFqT5RYCtD34bnDZ15m3Tz_3SjLw=VzZ_YTn8W5UbYKSgw@mail.gmail.com>
In-Reply-To: <e56d5447-9b08-d2e3-09a8-4546e4112d1c@allwinnertech.com>

On Tue, 29 Mar 2022 at 11:08, Michael Wu <michael@allwinnertech.com> wrote:
>
> On 28/03/2022 19:38, Ulf Hansson wrote:
> > On Mon, 28 Mar 2022 at 12:11, Michael Wu <michael@allwinnertech.com> wrote:
> >>
> >> On 25/03/2022 18:13, Ulf Hansson wrote:
> >>> On Fri, 25 Mar 2022 at 06:46, Michael Wu <michael@allwinnertech.com> wrote:
> >>>>
> >>>> On 24/03/2022 19:27, Ulf Hansson wrote:
> >>>>> On Thu, 17 Mar 2022 at 10:14, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >>>>>>
> >>>>>> On Wed, 16 Mar 2022 at 17:08, Adrian Hunter <adrian.hunter@intel.com> wrote:
> >>>>>>>
> >>>>>>> On 16.3.2022 16.46, Christian Löhle wrote:
> >>>>>>>>> So we are not going to let the block layer know about SD cache?
> >>>>>>>>> Or is it a separate change?
> >>>>>>>>
> >>>>>>>> I have some code for this lying around, but since it requires reading, parsing and writing Function Registers,
> >>>>>>>> in particular PEH, it takes a lot of boilerplate code to get the functionality. I'll clean it up and send a patch in the coming weeks.
> >>>>>>>>
> >>>>>>>
> >>>>>>> We have the SD cache flush.  We would presumably just need to call blk_queue_write_cache()
> >>>>>>> for the !mmc_card_mmc(card) case, e.g.:
> >>>>>>>
> >>>>>>>            if (mmc_has_reliable_write(card)) {
> >>>>>>>                    md->flags |= MMC_BLK_REL_WR;
> >>>>>>>                    enable_fua = true;
> >>>>>>>            }
> >>>>>>>
> >>>>>>>            if (mmc_cache_enabled(card->host))
> >>>>>>>                    enable_cache = true;
> >>>>>>>
> >>>>>>>            blk_queue_write_cache(md->queue.queue, enable_cache, enable_fua);
> >>>>>>
> >>>>>> To me, this seems like the most reasonable thing to do.
> >>>>>>
> >>>>>> However, I have to admit that it's not clear to me whether there was a
> >>>>>> good reason why commit f4c5522b0a88 ("mmc: Reliable write
> >>>>>> support.") also added support for REQ_FLUSH (write-back cache) rather
> >>>>>> than only REQ_FUA. I assume this was wrong too, right?
> >>>>>>
> >>>>
> >>>> Hi Ulf,
> >>>>
> >>>> 1. I've found the reason. If we only enable REQ_FUA, it has no
> >>>> effect -- the block layer won't send any request with the FUA flag to
> >>>> the driver.
> >>>> If we want REQ_FUA to take effect, we must also enable REQ_FLUSH. The
> >>>> reverse does not hold: REQ_FLUSH does not rely on REQ_FUA.
> >>>> In the earlier patch (commit f4c5522b0a88 ("mmc: Reliable write
> >>>> support.")), REQ_FLUSH was added to make REQ_FUA effective. I've done
> >>>> experiments to confirm this.
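> >>>>
> >>>> To illustrate the point (just a sketch of the driver-side calls, not
> >>>> code from the patch):
> >>>>
> >>>>     /* write-back cache present and flushable; no FUA */
> >>>>     blk_queue_write_cache(q, true, false);
> >>>>
> >>>>     /* write-back cache plus FUA writes */
> >>>>     blk_queue_write_cache(q, true, true);
> >>>>
> >>>>     /* no write-back cache advertised: the block layer strips the
> >>>>      * flush/FUA bits before requests reach the driver, so passing
> >>>>      * fua = true alone has no visible effect */
> >>>>     blk_queue_write_cache(q, false, true);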
> >>>
> >>> Thanks for doing the research and for confirming.
> >>>
> >>> Note that this is also pretty well documented in
> >>> Documentation/block/writeback_cache_control.rst.
> >>
> >> Thanks for reminding. I'm clear now.
> >>
> >>>
> >>>>
> >>>> 2. Why does the block layer require REQ_FLUSH to make REQ_FUA
> >>>> effective? I could not find the reason. Does anyone know? Thank you.
> >>>
> >>> The REQ_FLUSH flag indicates that the storage device has a write-back
> >>> cache, which can also be flushed in some device-specific way.
> >>>
> >>> The REQ_FUA (Force Unit Access) flag requests that the data be written
> >>> to the storage device such that, when the I/O request completes, the
> >>> data is fully on the medium (it must not be left in the write-back
> >>> cache). In other words, REQ_FUA doesn't make sense unless REQ_FLUSH is
> >>> supported too.
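> >>>
> >>> As a rough illustration from the submitter's side (a sketch only; the
> >>> flush_bio name is just for the example):
> >>>
> >>>     /* data must be on stable media when this bio completes */
> >>>     bio->bi_opf = REQ_OP_WRITE | REQ_FUA;
> >>>
> >>>     /* without FUA support: write normally, then drain the write-back
> >>>      * cache with an empty flush; this is also how the block layer
> >>>      * emulates REQ_FUA when only a cache/flush is advertised */
> >>>     bio->bi_opf = REQ_OP_WRITE;
> >>>     flush_bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH; /* REQ_FLUSH in older kernels */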
> >>>
> >>
> >> Thank you for your answer.
> >>
> >>> $subject patch should also conform to this pattern.
> >>
> >> I'm not sure I understood this correctly... Did you mean that I
> >> should modify the subject of this mail/patch?
> >
> > No, I just meant that the code in the patch should conform to this.
>
> No problem. Please have a look at the code below.
>
> >
> > If REQ_FUA is set, REQ_FLUSH must be set too.
> >
> >>
> >>>
> >>> However, it's still questionable to me whether we want to support
> >>> REQ_FUA through the eMMC reliable write command - in case we also have
> >>> support for managing the eMMC cache separately. It looks to me that
> >>> the reason why we currently support REQ_FUA is that, back in the
> >>> days when only the eMMC reliable write command was available, it
> >>> was simply the best we could do. But it was never really a good fit.
> >>>
> >>> I am starting to think that we may consider dropping REQ_FUA, if we
> >>> have the option to manage the eMMC cache separately - no matter
> >>> whether the eMMC reliable write command is supported or not. In this
> >>> case, REQ_FLUSH is sufficient and also a better match to what we
> >>> really can support.
> >>
> >> Hi Ulf,
> >> As to dropping REQ_FUA, I don't know if it is a good idea, but generally
> >> we are facing three possible situations:
> >>
> >> 1. If both cache and reliable-write are available, both REQ_FUA and
> >> REQ_FLUSH can be supported at the same time. In this case, with the
> >> cache available, a reliable write goes to the eMMC while bypassing the
> >> cache, which is consistent with the kernel's current definition of
> >> REQ_FUA. What's more, most eMMCs now support both the cache and the
> >> reliable-write command.
> >
> > Yes, this seems reasonable.
> >
> >
> >> 2. If only reliable-write is available, REQ_FUA should not be
> >> supported, which is also consistent with the current standard. But I
> >> don't think eMMCs that only support reliable-write are easy to find
> >> nowadays.
> >
> > If we drop REQ_FUA for this case, I am worried that we might break use
> > cases for those older eMMC devices.
> >
> > So, no, let's keep REQ_FUA and REQ_FLUSH if reliable-write is supported.
>
> OK. Let's keep them.
>
> >
> >> 3. If only cache is available, we just use REQ_FLUSH. It is not in
> >> conflict with keeping REQ_FUA.
> >
> > Right.
> >
> >>
> >> Maybe it is more reasonable to keep FUA and use an if/else to enable
> >> or disable it, considering compatibility? I mean, in most cases FUA
> >> and FLUSH are complementary, so it seems more feasible to choose with
> >> a branch.
> >
> > Let's summarize what I think we should do then:
> >
> >   if (reliable-write supported) {
> >           enable_fua = true;
> >           enable_cache = true;
> >   }
> >
> >   if (mmc_cache_enabled)
> >           enable_cache = true;
> >
> >   blk_queue_write_cache(md->queue.queue, enable_cache, enable_fua);
> >
> > Does this seem reasonable to you?
>
> Yes. Let me attach the complete code here:
>
>      if (md->flags & MMC_BLK_CMD23 &&
>          ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
>           card->ext_csd.rel_sectors)) {
>              md->flags |= MMC_BLK_REL_WR;
>              enable_fua = true;
>              enable_cache = true;
>      }
>
>      if (mmc_cache_enabled(card->host))
>              enable_cache = true;
>
>      blk_queue_write_cache(md->queue.queue, enable_cache, enable_fua);
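>
> For completeness, the two locals (declarations not shown above) would
> be declared and initialised to false earlier in the same function:
>
>      bool enable_fua = false;
>      bool enable_cache = false;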
>
>
> If this is good, I'll submit a patch-v2 soon.

Yes, this looks good to me!

Kind regards
Uffe


Thread overview: 21+ messages
2022-03-12  4:43 [PATCH] mmc: block: enable cache-flushing when mmc cache is on Michael Wu
2022-03-14  6:54 ` Adrian Hunter
2022-03-14  9:20   ` Michael Wu
2022-03-14  9:37     ` Avri Altman
2022-03-16  9:54       ` Michael Wu
2022-03-16 11:09         ` Avri Altman
2022-03-16 11:28           ` Adrian Hunter
2022-03-16 14:46             ` Christian Löhle
2022-03-16 16:05               ` Adrian Hunter
2022-03-17  9:14                 ` Ulf Hansson
2022-03-24 11:27                   ` Ulf Hansson
2022-03-25  5:45                     ` Michael Wu
2022-03-25 10:13                       ` Ulf Hansson
2022-03-28 10:11                         ` Michael Wu
2022-03-28 11:38                           ` Ulf Hansson
2022-03-29  9:08                             ` Michael Wu
2022-03-29  9:53                               ` Ulf Hansson [this message]
2022-03-27  8:09                 ` Avri Altman
2022-03-14 11:10     ` Avri Altman
2022-03-14  7:26 ` Avri Altman
2022-03-14 10:32   ` Adrian Hunter
