From: Dave Chinner <david@fromorbit.com>
To: Roman Gushchin <guro@fb.com>
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com, tj@kernel.org,
	Jan Kara <jack@suse.cz>
Subject: Re: [PATCH] cgroup, blkcg: prevent dirty inodes from pinning dying memory cgroups
Date: Tue, 8 Oct 2019 15:06:31 +1100	[thread overview]
Message-ID: <20191008040630.GA15134@dread.disaster.area> (raw)
In-Reply-To: <20191004221104.646711-1-guro@fb.com>

On Fri, Oct 04, 2019 at 03:11:04PM -0700, Roman Gushchin wrote:
> This is an RFC patch, not intended to be merged as is, but it will
> hopefully start a discussion that leads to a good solution for the
> problem described below.
> 
> --
> 
> We've noticed that the number of dying cgroups on our production hosts
> tends to grow with uptime. This time the cause is the writeback
> code.
> 
> An inode that gets dirtied for the first time is associated with a
> wb structure (see __inode_attach_wb()). It can later be switched to
> another wb under some conditions (e.g. another cgroup writing a lot
> of data to the same inode), but it generally stays associated for
> the rest of the inode structure's lifetime.
> 
> The problem is that the wb structure holds a reference to the
> original memory cgroup. So once an inode has been dirtied, it stands
> a good chance of pinning down its original memory cgroup.
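
For reference, the attach path looks roughly like this (an abridged
sketch of __inode_attach_wb() from fs/fs-writeback.c -- the page-based
memcg lookup is omitted and details vary by kernel version). The key
point is that wb_get_create() finds or creates the per-(bdi, memcg) wb
and holds a reference on the memcg css, which is what keeps the cgroup
pinned:

/*
 * Abridged sketch: __inode_attach_wb() associates a newly dirtied
 * inode with the wb of the current task's memory cgroup.
 */
void __inode_attach_wb(struct inode *inode, struct page *page)
{
	struct backing_dev_info *bdi = inode_to_bdi(inode);
	struct bdi_writeback *wb = NULL;

	if (inode_cgwb_enabled(inode)) {
		struct cgroup_subsys_state *memcg_css;

		/* pin the current task's memcg css across the lookup */
		memcg_css = task_get_css(current, memory_cgrp_id);
		wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
		css_put(memcg_css);
	}
	if (!wb)
		wb = &bdi->wb;	/* fall back to the bdi's root wb */

	/* multiple dirtiers may race to attach; cmpxchg() picks a winner */
	if (unlikely(cmpxchg(&inode->i_wb, NULL, wb)))
		wb_put(wb);
}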
> 
> An example from real life: some service runs periodically and
> updates rpm packages, each time in a new memory cgroup. The installed
> .so files are heavily used by other cgroups, so the corresponding
> inodes tend to stay alive for a long time, and so do the memory
> cgroups they pin. In production I've seen many hosts with 1-2
> thousand dying cgroups.
> 
> This is not the first problem caused by dying memory cgroups. As
> always, the issue is their relative size: memory cgroups are large
> objects, easily 100x-1000x larger than inodes. So keeping a couple
> thousand dying cgroups in memory without a good reason (something we
> happily do with inodes) is quite costly, measured in tens to
> hundreds of MB.
> 
> One possible approach to this problem is to switch inodes associated
> with dying wbs to the root wb. Switching is a best-effort operation
> that can fail silently, so unfortunately we can't make a single pass
> over a list of associated inodes (even if we had such a list). We
> really have to scan all inodes.
> 
> In the proposed patch I schedule a work item on each memory cgroup
> deletion, which is probably too frequent. Alternatively, we could run
> it periodically under some condition (e.g. when the number of dying
> memory cgroups exceeds some threshold X). So it's basically a GC run.
> 
> I wonder if there are any better ideas?
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>
> ---
>  fs/fs-writeback.c | 29 +++++++++++++++++++++++++++++
>  mm/memcontrol.c   |  5 +++++
>  2 files changed, 34 insertions(+)
> 
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 542b02d170f8..4bbc9a200b2c 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -545,6 +545,35 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
>  	up_read(&bdi->wb_switch_rwsem);
>  }
>  
> +static void reparent_dirty_inodes_one_sb(struct super_block *sb, void *arg)
> +{
> +	struct inode *inode, *next;
> +
> +	spin_lock(&sb->s_inode_list_lock);
> +	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
> +		spin_lock(&inode->i_lock);
> +		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
> +			spin_unlock(&inode->i_lock);
> +			continue;
> +		}
> +
> +		if (inode->i_wb && wb_dying(inode->i_wb)) {
> +			spin_unlock(&inode->i_lock);
> +			inode_switch_wbs(inode, root_mem_cgroup->css.id);
> +			continue;
> +		}
> +
> +		spin_unlock(&inode->i_lock);
> +	}
> +	spin_unlock(&sb->s_inode_list_lock);

No idea what the best solution is, but I think this is fundamentally
unworkable. It's not uncommon to have a hundred million cached
inodes these days, often on a single filesystem. Anything that
requires a brute-force system wide inode scan, especially without
conditional reschedule points, is largely a non-starter.
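
For comparison, the long s_inodes walks we already have (e.g.
evict_inodes() in fs/inode.c) at least drop the list lock and
reschedule periodically. A rough sketch of that pattern, glossing
over the cost of restarting the walk each time:

again:
	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		/* ... examine/switch one inode ... */

		if (need_resched()) {
			/* don't hold the lock across millions of inodes */
			spin_unlock(&sb->s_inode_list_lock);
			cond_resched();
			goto again;
		}
	}
	spin_unlock(&sb->s_inode_list_lock);

And even with that, a full-filesystem scan per dead cgroup doesn't
scale.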

Also, inode_switch_wbs() is not guaranteed to move the inode to the
destination wb.  There can only be WB_FRN_MAX_IN_FLIGHT (1024)
switches in flight at once and switches are run via RCU callbacks,
so I suspect that using inode_switch_wbs() for bulk re-assignment is
going to be a lot more complex than just finding inodes to call
inode_switch_wbs() on....
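
To illustrate the silent failure modes: inode_switch_wbs() bails out
without telling the caller anything in several places -- roughly this
(from fs/fs-writeback.c, modulo exact ordering in current mainline):

	/* noop if a switch already appears to be in progress */
	if (inode->i_state & I_WB_SWITCH)
		return;

	/* silently refuse if too many switches are already in flight */
	if (atomic_read(&isw_nr_in_flight) > WB_FRN_MAX_IN_FLIGHT)
		return;

	isw = kzalloc(sizeof(*isw), GFP_ATOMIC);
	if (!isw)
		return;	/* GFP_ATOMIC allocation failure is silent, too */

None of those bail-outs are visible to the caller, so a one-pass bulk
walk can't tell which inodes actually moved and which need revisiting.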

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 11+ messages
2019-10-04 22:11 [PATCH] cgroup, blkcg: prevent dirty inodes from pinning dying memory cgroups Roman Gushchin
2019-10-07 14:57 ` Vlastimil Babka
2019-10-07 23:35   ` Roman Gushchin
2019-10-07 16:19 ` Michal Koutný
2019-10-07 23:24   ` Roman Gushchin
2019-10-08  4:06 ` Dave Chinner [this message]
     [not found]   ` <20191008053854.GA14951@castle.dhcp.thefacebook.com>
2019-10-08  8:20     ` Jan Kara
2019-10-09  5:19       ` Roman Gushchin
2019-10-09 21:48       ` Roman Gushchin
2019-10-07  6:01 Hillf Danton
2019-10-07 22:02 ` Roman Gushchin
