From: Joe Thornber <thornber@redhat.com>
To: stefanrin@gmail.com
Cc: dm-devel@redhat.com, ejt@redhat.com
Subject: Re: Significantly dropped dm-cache performance in 4.13 compared to 4.11
Date: Tue, 14 Nov 2017 11:00:47 +0000	[thread overview]
Message-ID: <20171114110046.6lmv34rqngs6pjwf@reti> (raw)
In-Reply-To: <20171113190111.GE32510@redhat.com>

On Mon, Nov 13, 2017 at 02:01:11PM -0500, Mike Snitzer wrote:
> On Mon, Nov 13 2017 at 12:31pm -0500,
> Stefan Ring <stefanrin@gmail.com> wrote:
> 
> > On Thu, Nov 9, 2017 at 4:15 PM, Stefan Ring <stefanrin@gmail.com> wrote:
> > > On Tue, Nov 7, 2017 at 3:41 PM, Joe Thornber <thornber@redhat.com> wrote:
> > >> On Fri, Nov 03, 2017 at 07:50:23PM +0100, Stefan Ring wrote:
> > >>> It strikes me as odd that the amount read from the spinning disk is
> > >>> actually more than what comes out of the combined device in the end.
> > >>
> > >> This suggests dm-cache is trying to promote way too much.
> > >> I'll try and reproduce the issue; your setup sounds pretty straightforward.
> > >
> > > I think it's actually the most straightforward you can get ;).
> > >
> > > I've also tested kernel 4.12 in the meantime, which behaves just like
> > > 4.13. So the difference in behavior seems to have been introduced
> > > somewhere between 4.11 and 4.12.
> > >
> > > I've also done plain dd from the dm-cache disk to /dev/null a few
> > > times, which wrote enormous amounts of data to the SSD. My poor SSD
> > > has received the same amount of writes during the last week that it
> > > has had to endure during the entire previous year.
> > 
> > Do you think it would make a difference if I removed and recreated the cache?
> > 
> > I don't want to fry my SSD any longer. I've just copied several large
> > files into the dm-cached zfs dataset, and while reading them back
> > immediately afterwards, the SSD started writing crazy amounts again.
> > In my understanding, linear reads should rarely end up on the cache
> > device, but that is absolutely not what I'm experiencing.
> 
> Joe tried to reproduce your reported issue today and couldn't.

I'm not sure what's going on here.  Would you mind sending me the
metadata please?  Either a cache_dump of it, or a copy of the metadata
dev?
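As an aside, the promotion behaviour can be watched without any extra
tooling: `dmsetup status <cache-dev>` reports hit/miss, demotion, and
promotion counters. A small sketch of pulling those out (the field
positions follow the kernel's Documentation/device-mapper/cache.txt;
the sample status line, device name, and all numbers in it are made up
for illustration):

```python
# Hypothetical output of `dmsetup status my-cache`. After the "cache"
# target name, the fields are (per Documentation/device-mapper/cache.txt):
# metadata-block-size, used/total-metadata, cache-block-size,
# used/total-cache, read-hits, read-misses, write-hits, write-misses,
# demotions, promotions, dirty, then feature/policy arguments.
line = ("0 209715200 cache 8 1024/65536 512 4096/32768 "
        "1000 50000 2000 3000 120 98765 0 1 writeback "
        "2 migration_threshold 2048 smq 0 rw -")

fields = line.split()
assert fields[2] == "cache"        # sanity check: this is a cache target
stats = fields[3:]                 # counters start after the target name

counters = {
    "read_hits":    int(stats[4]),
    "read_misses":  int(stats[5]),
    "write_hits":   int(stats[6]),
    "write_misses": int(stats[7]),
    "demotions":    int(stats[8]),
    "promotions":   int(stats[9]),
}
print(counters["promotions"])
```

Sampling that a few times during a linear read should make it obvious
whether the promotion counter is climbing when it shouldn't be.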

- Joe


Thread overview: 9+ messages
2017-11-03 18:50 Significantly dropped dm-cache performance in 4.13 compared to 4.11 Stefan Ring
2017-11-07 14:41 ` Joe Thornber
2017-11-09 15:15   ` Stefan Ring
2017-11-13 17:31     ` Stefan Ring
2017-11-13 19:01       ` Mike Snitzer
2017-11-14 11:00         ` Joe Thornber [this message]
2017-11-14 14:53           ` Stefan Ring
2017-11-14 18:41           ` Stefan Ring
2017-12-15 16:03         ` Stefan Ring
