linux-fsdevel.vger.kernel.org archive mirror
From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: Amir Goldstein <amir73il@gmail.com>
Cc: Alexander Larsson <alexl@redhat.com>,
	Jingbo Xu <jefflexu@linux.alibaba.com>,
	gscrivan@redhat.com, brauner@kernel.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	david@fromorbit.com, viro@zeniv.linux.org.uk,
	Vivek Goyal <vgoyal@redhat.com>,
	Miklos Szeredi <miklos@szeredi.hu>
Subject: Re: [PATCH v3 0/6] Composefs: an opportunistically sharing verified image filesystem
Date: Thu, 2 Feb 2023 15:37:20 +0800	[thread overview]
Message-ID: <02edb5d6-a232-eed6-0338-26f9a63cfdb6@linux.alibaba.com> (raw)
In-Reply-To: <9c8e76a3-a60a-90a2-f726-46db39bc6558@linux.alibaba.com>



On 2023/2/2 15:17, Gao Xiang wrote:
> 
> 
> On 2023/2/2 14:37, Amir Goldstein wrote:
>> On Wed, Feb 1, 2023 at 1:22 PM Gao Xiang <hsiangkao@linux.alibaba.com> wrote:
>>>
>>>
>>>
>>> On 2023/2/1 18:01, Gao Xiang wrote:
>>>>
>>>>
>>>> On 2023/2/1 17:46, Alexander Larsson wrote:
>>>>
>>>> ...
>>>>
>>>>>>
>>>>>>                                     | uncached(ms)| cached(ms)
>>>>>> ----------------------------------|-------------|-----------
>>>>>> composefs (with digest)           | 326         | 135
>>>>>> erofs (w/o -T0)                   | 264         | 172
>>>>>> erofs (w/o -T0) + overlayfs       | 651         | 238
>>>>>> squashfs (compressed)             | 538         | 211
>>>>>> squashfs (compressed) + overlayfs | 968         | 302
>>>>>
>>>>>
>>>>> Clearly erofs with sparse files is the best fs now for the ro-fs +
>>>>> overlay case. But still, we can see that the additional cost of the
>>>>> overlayfs layer is not negligible.
>>>>>
>>>>> According to Amir this could be helped by a special composefs-like mode
>>>>> in overlayfs, but it's unclear what performance that would reach, and
>>>>> we're then talking about net-new development that further complicates the
>>>>> overlayfs codebase. It's not clear to me which alternative is easier to
>>>>> develop/maintain.
>>>>>
>>>>> Also, the difference between cached and uncached here is less than in
>>>>> my tests. Probably because my test image was larger. With the test
>>>>> image I use, the results are:
>>>>>
>>>>>                                     | uncached(ms)| cached(ms)
>>>>> ----------------------------------|-------------|-----------
>>>>> composefs (with digest)           | 681         | 390
>>>>> erofs (w/o -T0) + overlayfs       | 1788        | 532
>>>>> squashfs (compressed) + overlayfs | 2547        | 443
>>>>>
>>>>>
>>>>> I gotta say it is weird though that squashfs performed better than
>>>>> erofs in the cached case. May be worth looking into. The test data I'm
>>>>> using is available here:
>>>>
>>>> As another wild guess, cached performance is just VFS stuff.
>>>>
>>>> I think the performance difference may be due to ACLs (since neither
>>>> composefs nor squashfs supports ACLs).  I already asked Jingbo
>>>> to collect more perf data to analyze this, but he's busy with
>>>> other work right now.
>>>>
>>>> Again, my overall point is quite simple, as always: currently
>>>> composefs is a read-only filesystem built around massive numbers of
>>>> symlink-like files.  It behaves as a subset of all generic read-only
>>>> filesystems, just for this specific use case.
>>>>
>>>> In fact there are many options to improve this (much as Amir
>>>> said before):
>>>>     1) improve overlayfs, and then it can be used with any local fs;
>>>>
>>>>     2) enhance erofs to support this (even without on-disk change);
>>>>
>>>>     3) introduce fs/composefs;
>>>>
>>>> In addition to option 1), option 2) has many benefits as well, since
>>>> your manifest files could then also hold real regular files in addition
>>>> to the composefs model.
>>>
>>> (add some words..)
>>>
>>> My first response at that time (on Slack) was to kindly request that
>>> Giuseppe ask on the fsdevel mailing list whether this new overlay model
>>> and its use cases are feasible; if so, I'd be happy to integrate it into
>>> EROFS (in a cooperative way) in several ways:
>>>
>>>    - just use the EROFS symlink layout and open such files in a stacked way;
>>>
>>> or (now)
>>>
>>>    - just recognize the overlayfs "trusted.overlay.redirect" xattr in
>>>      EROFS itself and open the file, so such an image can be used both
>>>      for EROFS alone and for EROFS + overlayfs.
>>>
>>> If that had happened, then I think the overlayfs "metacopy" option
>>> could also have been pointed out by other fs community people later
>>> (since I'm not an overlayfs expert), but I'm not sure why these options
>>> were finally deemed impossible and not even mentioned at all.
>>>
>>> Or if you guys really don't want to use EROFS for whatever reason
>>> (EROFS is completely open source, and used and contributed to by many
>>> vendors), you could extend squashfs, ext4, or other existing local
>>> filesystems to this new use case (they don't need any on-disk change
>>> either, for example by using some xattr); I don't think it's really hard.
>>>
>>
>> Engineering-wise, merging composefs features into EROFS
>> would be the simplest option and FWIW, my personal preference.
>>
>> However, you need to be aware that this will bring VFS considerations
>> into EROFS, such as s_stack_depth nesting (which AFAICS composefs does
>> not increment?). It's not the end of the world, but this is no longer a
>> plain fs-over-block game. There's a whole new class of bugs (that syzbot
>> is very eager to explore), so you need to ask yourself whether
>> this is a direction you want to lead EROFS towards.
> 
> I'd like to make a separate Kconfig for this.  I consider this just because
> currently composefs is much like EROFS, but it lacks the ability to keep
> real regular files (even a README, VERSION or Changelog in these images)
> in its (composefs-called) manifest files.  Its on-disk super block doesn't
> even have a UUID now [1], nor a boot sector for booting, nor room for
> potential hybrid formats such as tar + EROFS and cpio + EROFS.
> 
> I'm not sure those potential new on-disk features will stay unneeded even
> for future composefs.  But if composefs later supports such on-disk features,
> that brings composefs even closer to EROFS.  I don't see a disadvantage to
> making these actually on-disk compatible (like ext2 and ext4).
> 
> The only real difference now is the manifest file's I/O interface -- bio vs.
> file.  But EROFS can also be distributed on raw block devices; composefs can't.
> 
> Also, I'd like to separate core EROFS from the advanced features (people who
> are interested in working on these are always welcome) and from the
> composefs-like model; if people don't intend to use any EROFS advanced
> features, they can be disabled explicitly at compile time.

Apart from that, I still fail to see (apart from unprivileged mounts) how
the EROFS + overlayfs combination falls short on real automotive workloads
aside from "ls -lR" (readdir + stat).
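
For reference, the "ls -lR" workload here is just a recursive readdir + lstat
pass; a minimal sketch (the mountpoint path below is only a placeholder):

```python
# Minimal sketch of the "ls -lR" (readdir + stat) workload measured in
# the tables above.  The mountpoint path in the usage note is a placeholder.
import os
import time

def walk_stat(root):
    """Recursively readdir + lstat every entry; return the entry count."""
    count = 0
    stack = [root]
    while stack:
        with os.scandir(stack.pop()) as it:        # readdir
            for entry in it:
                count += 1
                entry.stat(follow_symlinks=False)  # lstat (no symlink follow)
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
    return count

# Usage (run twice: once after "echo 3 > /proc/sys/vm/drop_caches" for the
# uncached number, then again for the cached number):
#   t0 = time.monotonic()
#   n = walk_stat("/mnt/test-image")   # placeholder mountpoint
#   print(n, "entries in", round((time.monotonic() - t0) * 1000), "ms")
```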

And eventually we still need overlayfs for most use cases to do writable
stuff anyway; it needs some explanation why such a < 1s difference is so
very important to the real workload, as you already mentioned before.

And with overlayfs lazy lookup, I think it can get close to ~100ms or better.

> 
>>
>> Giuseppe expressed his plans to make use of the composefs method
>> inside userns one day. It is not a hard dependency, but I believe that
>> keeping the "RO efficient verifiable image format" functionality (EROFS)
>> separate from "userns composition of verifiable images" (overlayfs)
>> may benefit the userns mount goal in the long term.
> 
> If that is needed, I'm very happy to see a more detailed path to this come
> out of discussion at LSF/MM/BPF 2023: how do we get this (userns) working
> reliably in practice.
> 
> As for lines of code, the core EROFS on-disk format is quite simple (I don't
> think the total LOC is a barrier), if you see
>    fs/erofs/data.c
>    fs/erofs/namei.c
>    fs/erofs/dir.c
> 
> or
>     erofs_super_block
>     erofs_inode_compact
>     erofs_inode_extended
>     erofs_dirent
> 
> but, for example, fs/erofs/super.c, which mostly just enables EROFS advanced
> features, is almost 1000 LOC now.  Most of that code is quite trivial,
> though; I don't think it makes any difference to the userns plan.
> 
> Thanks,
> Gao Xiang
> 
> [1] https://lore.kernel.org/r/CAOQ4uxjm7i+uO4o4470ACctsft1m18EiUpxBfCeT-Wyqf1FAYg@mail.gmail.com/
> 
>>
>> Thanks,
>> Amir.

