From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
Andrew Morton <akpm@linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] lib/test_free_pages: Add basic progress indicators
Date: Mon, 19 Oct 2020 15:05:05 +0100 [thread overview]
Message-ID: <20201019140505.GR20115@casper.infradead.org> (raw)
In-Reply-To: <20201018171252.GA392079@kernel.org>

On Sun, Oct 18, 2020 at 08:12:52PM +0300, Mike Rapoport wrote:
> On Sun, Oct 18, 2020 at 04:01:46PM +0100, Matthew Wilcox wrote:
> > On Sun, Oct 18, 2020 at 04:39:27PM +0200, Geert Uytterhoeven wrote:
> > > Hi Matthew,
> > >
> > > On Sun, Oct 18, 2020 at 4:25 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > > On Sun, Oct 18, 2020 at 04:04:45PM +0200, Geert Uytterhoeven wrote:
> > > > > The test module to check that free_pages() does not leak memory does not
> > > > > provide any feedback whatsoever about its state or progress, but may take some
> > > > > time on slow machines. Add the printing of messages upon starting each
> > > > > phase of the test, and upon completion.
> > > >
> > > > It's not supposed to take a long time. Can you crank down that 1000 *
> > >
> > > It took 1m11s on ARAnyM, running on an i7-8700K.
> > > Real hardware may even take longer.
> >
> > 71 seconds is clearly too long. 0.7 seconds would be fine, so 10 * 1000
> > would be appropriate, but then that's only 320MB which might not be
> > enough to notice on a modern machine.
> >
> > > > 1000 to something more appropriate?
> > >
> > > What would be a suitable value? You do want to see it "leak gigabytes
> > > of memory and probably OOM your system" if something's wrong,
> > > so decreasing the value a lot may not be a good idea?
> > >
> > > Regardless, if it OOMs, I think you do want to see that it happened
> > > while running this test.
> >
> > How about scaling with the amount of memory on the machine?
> >
> > This might cause problems on machines with terabytes of memory.
> > Maybe we should cap it at a terabyte?
>
> On ARAnyM with 782 MBytes of RAM running on i7-8650U it takes ~1.75
> seconds.

That seems like a somewhat unusual configuration. I think it's pretty
strange to find an actual m68k with more than 128MB of memory. I mean,
I can set up my laptop to believe it has 64TB of memory, and this will
run slowly, but I don't think it's any real problem.

> Still, I think adding some verbosity to the test wouldn't hurt ;-)

I prefer the unix philosophy of only emitting messages if something's
wrong.

Thread overview: 7+ messages
2020-10-18 14:04 [PATCH] lib/test_free_pages: Add basic progress indicators Geert Uytterhoeven
2020-10-18 14:25 ` Matthew Wilcox
2020-10-18 14:39 ` Geert Uytterhoeven
2020-10-18 15:01 ` Matthew Wilcox
2020-10-18 17:12 ` Mike Rapoport
2020-10-19 14:05 ` Matthew Wilcox [this message]
2020-10-19 14:20 ` Geert Uytterhoeven