From: Kai Krakow <kai@kaishome.de>
To: Nix <nix@esperi.org.uk>
Cc: Eric Wheeler <bcache@lists.ewheeler.net>,
	linux-bcache@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 1/3] bcache: introduce bcache sysfs entries for ioprio-based bypass/writeback hints
Date: Tue, 6 Oct 2020 15:10:37 +0200
Message-ID: <CAC2ZOYvA966Jwa1CGepRDUmBn4=-vpZR82YZZQxT8L+f7-HTUQ@mail.gmail.com>
In-Reply-To: <87o8lfa692.fsf@esperi.org.uk>

Am Di., 6. Okt. 2020 um 14:28 Uhr schrieb Nix <nix@esperi.org.uk>:
>
> On 5 Oct 2020, Eric Wheeler verbalised:
>
> > [+cc:bcache and blocklist]
> >
> > (It did not look like this was being CC'd to the list so I have pasted the
> > relevant bits of conversation. Kai, please resend your patch set and CC
> > the list linux-bcache@vger.kernel.org)
>
> Oh sorry. I don't know what's been going on with the Cc:s here.
>
> > I am glad that people are still making effective use of this patch!
>
> :)
>
> > It works great unless you are using mq-scsi (or perhaps mq-dm). For the
> > multi-queue systems out there, ioprio does not seem to pass down through
> > the stack into bcache, probably because it is passed through a worker
> > thread for the submission or some other detail that I have not researched.
>
> That sounds like a bug in the mq-scsi machinery: it surely should be
> passing the ioprio off to the worker thread so that the worker thread
> can reliably mimic the behaviour of the thread it's acting on behalf of.

Maybe this was only an issue early in mq-scsi, before it gained more
schedulers than just iosched-none? It has bfq now, so it should work.
Depending on the filesystem, though, the priority may still not fully
apply... e.g. btrfs doesn't carry the ioprio over to delayed refs
resulting from such IO; it simply queues them at the top of the IO
queue.
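
FWIW, you can check which scheduler a multi-queue device is using and
switch it over to bfq via sysfs (sda is just an example here, adjust
for your device):

  # The active scheduler is shown in brackets.
  cat /sys/block/sda/queue/scheduler
  # Switching needs root.
  echo bfq > /sys/block/sda/queue/scheduler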

>
> > Long ago others had concerns using ioprio as the mechanism for cache
> > hinting, so what does everyone think about implementing cgroup inside of
> > bcache? From what I can tell, cgroups have a stronger binding to an IO
> > than ioprio hints.
>
> Nice idea, but...

Yeah, it would fit my use-case perfectly.

> using cgroups would make this essentially unusable for
> me, and probably for most other people, because on a systemd system the
> cgroup hierarchy is more or less owned in fee simple by systemd, and it
> won't let you use cgroups for something else,

That's probably not completely true: you can still define slices,
which act as cgroup containers for all services and processes placed
in them, and you can use "systemctl edit myslice.slice" to change
scheduler, memory accounting, and IO parameters at runtime.
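
As a sketch, you can even adjust such knobs on the fly without
editing unit files (the slice name is just an example):

  # Adjust resource-control properties of a running slice;
  # --runtime keeps the change non-persistent across reboots.
  systemctl set-property --runtime myslice.slice IOWeight=50 MemoryLow=512M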

> or even make your own
> underneath the ones it's managing: it sometimes seems to work but they
> can suddenly go away without warning and all the processes in them get
> transferred out by systemd or even killed off.

See above: use slices, and don't try to sneak around systemd's cgroup
management - especially not in services.

> (And as for making systemd set up suitable cgroups, that too would make
> it unusable for me: I tend to run jobs ad-hoc with ionice, use ionice in
> scripts etc to reduce caching when I know it won't be needed, and that
> sort of thing is just not mature enough to be reliable in systemd yet.

You can still define a slice for such ad-hoc processes by using
systemd-run to turn your process into a transient one-shot service.
It's not much different from prepending "ionice ... schedtool ...".
I'm using that to put some desktop programs in a resource jail to
avoid cache thrashing, e.g. by browsers which tend to dominate the
cache: https://github.com/kakra/gentoo-cgw (this integrates with the
package manager to replace the original executable with a wrapper).
It has some flaws, though: when running a browser from a Steam
container, it starts to act strange... But otherwise I'm using it
quite successfully.
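
For example, something like this should be roughly equivalent to an
ionice'd background job (the slice name and script path are made up):

  # Run an ad-hoc job as a transient one-shot service in a dedicated
  # slice instead of prefixing it with ionice/schedtool.
  systemd-run --slice=background.slice \
      -p IOSchedulingClass=idle -p CPUSchedulingPolicy=idle \
      -- /usr/local/bin/nightly-backup.sh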

> It's rare for a systemd --user invocation to get everything so confused
> that the entire system is rendered unusable, but it has happened to me
> in the past, so unlike ionice I am now damn wary of using systemd --user
> invocations for anything. They're a hell of a lot clunkier for ad-hoc
> use than a simple ionice, too: you can't just say "run this command in a
> --user", you have to set up a .service file etc.)

Not sure what you did, I've never experienced that. Usually that
happens when processes managed by a systemd service try to escape the
current session, e.g. by running "su -" or "sudo", so some uses of
ionice may see similar results.

So my current situation is: I defined a slice for background jobs
(backup, maintenance jobs etc.), one for games (boosting their
CPU/IO/memory share), one for browsers (limiting CPU to fight runaway
JavaScript), and some more. The trick is to give every slice a lower
bound of memory below which the kernel won't reclaim from it - I
found that's one of the most important knobs against laggy desktop
usage. I usually look at the memory the processes need while running,
then add some amount of cache I think would be useful for them, since
cgroup memory accounting luckily covers app allocations AND cache
memory. Limiting memory with cgroups, on the other hand, can have
quite the opposite effect (processes tend to swap then, even with
plenty of RAM available).
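
A rough sketch of what I mean (slice names and sizes are examples
from my setup, not recommendations):

  # Protect a working-set floor instead of capping memory: the kernel
  # avoids reclaiming below MemoryLow.
  systemctl set-property browsers.slice MemoryLow=2G CPUQuota=300%
  systemctl set-property games.slice MemoryLow=4G CPUWeight=200
  systemctl set-property background.slice CPUWeight=20 IOWeight=20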

Regards,
Kai
