linux-unionfs.vger.kernel.org archive mirror
From: cgxu <cgxu519@mykernel.net>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Jan Kara <jack@suse.cz>, Miklos Szeredi <miklos@szeredi.hu>,
	overlayfs <linux-unionfs@vger.kernel.org>,
	Sargun Dhillon <sargun@sargun.me>
Subject: Re: [PATCH v12] ovl: improve syncfs efficiency
Date: Tue, 26 May 2020 15:50:13 +0800	[thread overview]
Message-ID: <6bce615e-b8ef-e63f-3829-e2b785a02f5d@mykernel.net> (raw)
In-Reply-To: <CAOQ4uxi4coKOoYar7Y==i=P21j5r8fi_0op+BZR-VQ1w5CMUew@mail.gmail.com>



On 5/20/20 3:24 PM, Amir Goldstein wrote:
> On Wed, May 20, 2020 at 4:02 AM cgxu <cgxu519@mykernel.net> wrote:
>>
>> On 5/6/20 5:53 PM, Chengguang Xu wrote:
>>> The current syncfs(2) syscall on overlayfs just calls sync_filesystem()
>>> on upper_sb to synchronize all dirty inodes in the upper filesystem,
>>> regardless of which overlay instance owns each inode. In the container
>>> use case, when multiple containers share the same underlying upper
>>> filesystem, this has the shortcomings below.
>>>
>>> (1) Performance
>>> Synchronization is needlessly heavy because it also syncs inodes that
>>> are irrelevant to the target overlayfs instance.
>>>
>>> (2) Interference
>>> Unplanned synchronization can hurt the I/O performance of unrelated
>>> container processes on other overlayfs instances.
>>>
>>> This patch syncs only the dirty upper inodes that belong to the specific
>>> overlayfs instance and waits for completion. This reduces the cost of
>>> synchronization and avoids seriously impacting the I/O performance of
>>> unrelated processes.
>>>
>>> Signed-off-by: Chengguang Xu <cgxu519@mykernel.net>
>>
>> Besides the explicit syncfs triggered by a user process, there is also an
>> implicit syncfs during the umount of an overlayfs instance. Every syncfs
>> is delivered to the upper fs, and all dirty data of the upper fs is synced
>> to the persistent device at the same time.
>>
>> In a high-density container environment, especially for temporary jobs,
>> this is quite undesirable behavior. Should we provide an option to
>> mitigate this effect for containers that don't care about dirty data?
>>
> 
> This is not the first time this sort of suggestion has been made:
> https://lore.kernel.org/linux-unionfs/4bc73729-5d85-36b7-0768-ae5952ae05e9@mykernel.net/T/#md5fc5d51852016da7e042f5d9e5ef7a6d21ea822

The link above seems to point just to my own earlier thread on the mailing list.


> 
> At the time, I proposed to use the SHUTDOWN ioctl as a means
> for container runtimes to communicate a careless teardown.
> 
> I've pushed an up-to-date version of the ovl-shutdown RFC [1].
> It is only lightly tested.
> It does not take care of OVL_SHUTDOWN_FLAGS_NOSYNC, but that
> is trivial. I also think it misses some smp_mb__after_atomic() for
> accessing ofs->goingdown and ofs->creator_cred.
> I did not address my own comments on the API [2].
> And there are no tests at all.
> 
> If this works for your use case, let me know how you want to proceed.
> I could re-post the ioctl and access hook patches, leaving out the actual
> shutdown patch for you to work on.
> You may add some of your own patches, write tests and post v2.
> 

The use case seems slightly different from ours. In our use case, we hope
to skip sync behavior at the overlayfs layer (sometimes there will still be
syncing triggered by writeback of the upper layer) for certain kinds of
containers (not all kinds of containers).

Optimizing syncfs will mitigate the effect of sync behavior, but directly
skipping the syncing of dirty data may be better for this special use case.
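
To make the kind of opt-out I have in mind concrete, here is a rough
kernel-style sketch. This is not from any posted patch: the "nosync"
option and the ofs->config.nosync field are hypothetical names, and the
ovl_sync_fs() shape just follows the usual ->sync_fs superblock op.

/* Hypothetical sketch only: skip syncfs entirely for overlay
 * instances mounted with a (made-up) "nosync" option. */
static int ovl_sync_fs(struct super_block *sb, int wait)
{
	struct ovl_fs *ofs = sb->s_fs_info;
	struct super_block *upper_sb;
	int ret;

	if (!ofs->upper_mnt)
		return 0;

	/* Containers that don't care about dirty data opt out here,
	 * so their syncfs(2) and umount never reach the upper fs. */
	if (ofs->config.nosync)
		return 0;

	if (!wait)
		return 0;

	upper_sb = ofs->upper_mnt->mnt_sb;
	down_read(&upper_sb->s_umount);
	ret = sync_filesystem(upper_sb);
	up_read(&upper_sb->s_umount);
	return ret;
}

Writeback initiated by the upper fs itself would of course still happen;
this only short-circuits overlay-initiated syncs.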


Thanks,
cgxu

> 
> [1] https://github.com/amir73il/linux/commits/ovl-shutdown
> [2] https://lore.kernel.org/linux-unionfs/CAOQ4uxiau7N6NtMLzjwPzHa0nMKZWi4nu6AwnQkR0GFnKA4nPg@mail.gmail.com/
> 


Thread overview: 10+ messages
2020-05-06  9:53 [PATCH v12] ovl: improve syncfs efficiency Chengguang Xu
2020-05-20  1:01 ` cgxu
2020-05-20  7:24   ` Amir Goldstein
2020-05-22  9:31     ` Miklos Szeredi
2020-05-22 13:44       ` Vivek Goyal
2020-05-22 14:40         ` Giuseppe Scrivano
2020-05-26  7:50     ` cgxu [this message]
2020-05-26  8:25       ` Amir Goldstein
2020-08-31 14:22 ` cgxu
2020-08-31 14:58   ` Miklos Szeredi
