From: "Theodore Ts'o" <tytso@mit.edu>
To: Luis Chamberlain <mcgrof@kernel.org>
Cc: Sasha Levin <sashal@kernel.org>,
	Amir Goldstein <amir73il@gmail.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	lsf-pc <lsf-pc@lists.linux-foundation.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Jan Kara <jack@suse.cz>,
	"Darrick J. Wong" <darrick.wong@oracle.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [LSF/MM TOPIC] FS, MM, and stable trees
Date: Fri, 11 Mar 2022 00:23:55 -0500	[thread overview]
Message-ID: <Yirc69JyH5N/pXKJ@mit.edu> (raw)
In-Reply-To: <Yij4lD19KGloWPJw@bombadil.infradead.org>

On Wed, Mar 09, 2022 at 10:57:24AM -0800, Luis Chamberlain wrote:
> On Tue, Mar 08, 2022 at 02:06:57PM -0500, Sasha Levin wrote:
> > What we can't do is invest significant time into doing the testing work
> > ourselves for each and every subsystem in the kernel.
> 
> I think this experience helps though, it gives you I think a better
> appreciation for what concerns we have to merge any fix and the effort
> and diligence required to ensure we don't regress. I think the
> kernel-ci steady state goal takes this a bit further.

Different communities seem to have different goals that they believe
the stable kernels should be aiming for.  Sure, if you never merge any
fix, you can guarantee that there will be no regressions.  However,
the question is whether the result is a better quality kernel.  For
example, there is a recent change to XFS which fixes a security bug
which allows an attacker to gain access to deleted data.  How do you
balance the tradeoff of "no regressions, ever" versus "we'll leave a
security bug in XFS which is fixed in mainline Linux, but we fear
regressions so much that we won't even backport a single-line fix to
the stable kernel"?

In my view, the service which Greg, Sasha and the other stable
maintainers provide is super-valuable, and I am happy that ext4
changes are automatically cherry-picked into the stable kernel.  Have
there been times when this has resulted in regressions in ext4 for the
stable kernel?  Sure!  It's only been a handful of times, though,
and the number of bugs that users running stable kernels have _not_
had to suffer *far* outweighs the downsides of the occasional
regression (which gets found and then reverted).

> 
> Perhaps the one area that might interest folks is the test setup,
> using loopback drives and truncated files, if you find holes in
> this please let me know:
> 
> https://github.com/mcgrof/kdevops/blob/master/docs/testing-with-loopback.md
> 
> In my experience this setup just finds *more* issues, rather than less,
> and in my experience as well none of these issues found were bogus, they
> always lead to real bugs:
> 
> https://github.com/mcgrof/kdevops/blob/master/docs/seeing-more-issues.md

Different storage devices --- Google Cloud Persistent Disks, versus
single spindle HDD's, SSD's, eMMC flash, iSCSI devices --- will have
different timing characteristics, and this will affect what failures
you are likely to find.

So if most of the developers for a particular file system tend to use
a particular kind of hardware --- say, HDD's and SSD's --- and you use
something different, such as file-based loopback drives, it's not
surprising that you'll find a different set of failures more often.
It's not that loopback drives are inherently better at finding
problems --- it's just that all of the bugs that are easily findable
on HDD and SSD devices have already been fixed, and so the first
person to test using loopback will find a bunch of new bugs.
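
As an aside, the underlying mechanism behind that kind of setup is
just a sparse, truncated backing file attached to a loop device.  The
sketch below only illustrates that mechanism, not how kdevops itself
does it; the paths and sizes in it are invented, and a real test rig
would create several such devices and run mkfs on them before
pointing xfstests at them:

/* Illustration only: attach a sparse (truncated) file to a free loop
 * device, which is the basic mechanism behind file-backed test disks.
 * Paths and sizes are made up; error handling is minimal.
 * Build: cc -o loop-demo loop-demo.c
 */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	int backing = open("/tmp/fs-image", O_RDWR | O_CREAT, 0600);

	/* A 20G sparse file; blocks are only allocated as tests write. */
	if (backing < 0 || ftruncate(backing, 20LL << 30) < 0) {
		perror("backing file");
		return 1;
	}

	/* Ask loop-control for a free /dev/loopN minor number. */
	int ctl = open("/dev/loop-control", O_RDWR);
	int nr = ctl < 0 ? -1 : ioctl(ctl, LOOP_CTL_GET_FREE);
	if (nr < 0) {
		perror("LOOP_CTL_GET_FREE");
		return 1;
	}

	char path[32];
	snprintf(path, sizeof(path), "/dev/loop%d", nr);

	/* Bind the sparse file to the loop device. */
	int loopfd = open(path, O_RDWR);
	if (loopfd < 0 || ioctl(loopfd, LOOP_SET_FD, backing) < 0) {
		perror("LOOP_SET_FD");
		return 1;
	}

	printf("attached /tmp/fs-image to %s\n", path);
	return 0;
}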

This is why I consider myself very lucky that one of the ext4
developers had been testing on a Raspberry Pi, and he found bugs that
were missed on my GCE setup, and vice versa.  And when he moved to a
newer test rig, which had a faster CPU and faster SSD, he found a
different set of flaky test failures that he couldn't reproduce on his
older test system.

So having a wide diversity of test rigs is really important.  Another
take-home lesson is that if you are responsible for a vast number of data
center servers, there isn't a real substitute for running tests on the
hardware that you are using in production.  One of the reasons why we
created android-xfstests was that there were some bugs that weren't
found when testing using KVM, but were much more easily found when
running xfstests on an actual Android device.  And it's why we run
continuous test spinners running xfstests using data center HDD's,
data center SSD's, iSCSI, iBlock (basically something like FUSE but
for block devices, that we'd love to get upstreamed someday), etc.
And these tests are run using the exact file system configuration that
we use in production.
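
To make "the exact file system configuration" concrete: for xfstests
that mostly comes down to the local.config sections we run with.  The
example below is invented for illustration (the device names, mkfs
options, and mount options are not our actual production settings),
but it shows the general shape:

  [ext4_prod_like]
  FSTYP=ext4
  TEST_DEV=/dev/sdb1
  TEST_DIR=/mnt/test
  SCRATCH_DEV=/dev/sdc1
  SCRATCH_MNT=/mnt/scratch
  MKFS_OPTIONS="-b 4096 -O metadata_csum,64bit"
  MOUNT_OPTIONS="-o noatime"

A separate section per production configuration (block size,
encryption, mount options) lets the same test spinner sweep through
all of them.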

Different people will have different test philosophies, but mine is
that I'm not looking for absolute perfection on upstream kernels.  I
get much better return on investment if I do basic testing for
upstream, but reserve the massive, continuous test spinning for the
hardware platforms that my employer cares the most about from a $$$
perspective.

And it's actually not about the hardware costs.  The hardware costs
are actually pretty cheap, at least from a company's perspective.
What's actually super-duper expensive is the engineering time to
monitor the test results, and to analyze and root-cause flaky test
failures.  In general, "People time" >>> "hardware costs", by two
orders of magnitude.

So ultimately, it's going to be about the business case.  If I can
justify to my company why investing a portion of an engineer's time
to set up a dedicated test spinner on a particular hardware / software
combination is worthwhile, I can generally get the head count.  But
if it's to do massive testing on an LTS kernel or a file system that
doesn't have commercial value for many companies, it's going to be a
tough slog.

Fortunately, though, I claim that we don't need to run xfstests a
thousand times before a commit is deemed "safe" for backporting to LTS
kernels.  (I'm pretty sure we aren't doing that during our upstream
development.)

It makes more sense to reserve that kind of intensive testing for
product kernels which are downstream of LTS, and if they find
problems, they can report that back to the stable kernel maintainers,
and if necessary, we can revert a commit.  In fact, I suspect that
when we *do* that kind of intensive testing, we'll probably find that
the problem still exists in upstream, it's just that no one had
actually noticed.

That's certainly been my experience.  When we first deployed ext4 to
Google Data Centers, ten years ago, the fact that we had extensive
monitoring meant that we found a data corruption bug that was
ultimately root caused to a spinlock being released one line too
early.  Not only was the bug unfixed in upstream, it turned out that
the bug had been in upstream for ten years before *that*, and it had
gone undetected through multiple Red Hat and SuSE "golden master"
release testing cycles, and by all of the enterprise users of RHEL
and SLES.

(My guess is that people had chalked the failure up to cosmic rays,
or unreproducible hardware flakiness.  It was only when we ran at
scale, on millions of file systems under high stress, with
sufficiently automated monitoring of our production servers, that we
were able to detect it.)
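
To make "one line too early" concrete, the bug class looks roughly
like the following userspace sketch.  This is not the actual ext4
code, just an illustration of how a single misplaced unlock lets a
concurrent thread observe a half-updated structure:

/* Illustration of the bug class only, not the real ext4 fix.
 * Build: cc -pthread -o unlock-demo unlock-demo.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
static long total;	/* invariant: shadow must equal total ... */
static long shadow;	/* ... whenever the lock is not held      */

static void update_buggy(long delta)
{
	pthread_spin_lock(&lock);
	total += delta;
	pthread_spin_unlock(&lock);  /* BUG: dropped one line too early */
	shadow += delta;             /* racy: another thread can now see
	                              * total != shadow */
}

static void update_fixed(long delta)
{
	pthread_spin_lock(&lock);
	total += delta;
	shadow += delta;             /* both updates finish under the lock */
	pthread_spin_unlock(&lock);
}

int main(void)
{
	/* Single-threaded driver, just so the sketch compiles and runs;
	 * the corruption only shows up with concurrent callers under load. */
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	update_fixed(1);
	update_buggy(1);
	printf("total=%ld shadow=%ld\n", total, shadow);
	return 0;
}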

So that's why I'm a bit philosophical about testing.  More testing is
always good, but perfection is not attainable.  So we test up to where
it makes business sense, and we accept that there may be some bug
escapes.  That's OK, though, since I'd much rather make sure security
bugs and other stability bugs get backported, even if that means that
once in a blue moon, there is a regression that requires a revert in
the LTS kernel.

Cheers,

					- Ted
