From: Ben Schroeder <klowd92@gmail.com>
To: Richard Weinberger <richard.weinberger@gmail.com>
Cc: linux-mtd@lists.infradead.org
Subject: Re: Available space loss due to fragmentation?
Date: Wed, 10 Jul 2019 18:18:03 +0300
Message-ID: <CAMk-x8NfGfXZ6c0QV9kVO677PSo4gmHCgPKV=7_iz7HYX6wYEQ@mail.gmail.com>
In-Reply-To: <CAFLxGvy=iwwUbS8O1xMAtMZYibzQ+vbft1ZVpg9erR=S1c8s2g@mail.gmail.com>

On Wed, Jul 10, 2019 at 12:34 PM Richard Weinberger
<richard.weinberger@gmail.com> wrote:
>
> Ben,
>
> On Wed, Jul 10, 2019 at 5:32 AM Ben Schroeder <klowd92@gmail.com> wrote:
> > Why do I see a loss of space when rewriting the same file?
>
> Please see my answer below.
>
> > Can I use an upgrade scheme with file binary diff as mentioned above -
> > One that would run correctly with low available space?
>
> If the filesystem is full and all nodes are already packed, it can be
> a challenge.
>
> > Can I use an upgrade scheme with UBI volume binary diff?
>
> Yes, you can alter a dynamic volume as you wish. But keep NAND oddities in mind:
> you need to replace whole LEBs.
>
> > Sorry for the long mail, I have not found much information about fragmentation
> > and space loss in UBIFS. Let me know if I forgot any relevant details.
>
> I think the root cause of the problem you see is how NAND works.
> On NAND we always write full pages. So if you ask UBIFS to change one byte
> of a file or change metadata, it has to waste a full page.
>
> Luckily, Linux is a modern operating system with a write cache, so upon
> write-back UBIFS can pack nodes (UBIFS data nodes, inode nodes, etc.)
> together and waste less space.
> But it still wastes a significant amount of space if userspace forces
> it to persist data, e.g. by using fsync()/fdatasync().
> If UBIFS runs out of space, the garbage collector will rewrite nodes
> and pack them tightly together.
>
> So, if you have a pre-created UBIFS, nodes are already packed, and your
> update mechanism might force UBIFS to write faster than the garbage
> collector can pack nodes.
>
> With that information in mind, are your other questions resolved?
>

Thanks for the reply, Richard.
I just wanted to reiterate that I am using SPI NOR flash, partitioned
in an A/B scheme like so:

mtd7
Name:                           rootfs
Type:                           nor
Eraseblock size:                65536 bytes, 64.0 KiB
Amount of eraseblocks:          880 (57671680 bytes, 55.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:14
Bad blocks are allowed:         false
Device is writable:             true

mtd8
Name:                           rootfs1
Type:                           ubi
Eraseblock size:                65408 bytes, 63.9 KiB
Amount of eraseblocks:          353 (23089024 bytes, 22.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:16
Bad blocks are allowed:         false
Device is writable:             true

mtd9
Name:                           rootfs2
Type:                           ubi
Eraseblock size:                65408 bytes, 63.9 KiB
Amount of eraseblocks:          353 (23089024 bytes, 22.0 MiB)
Minimum input/output unit size: 1 byte
Sub-page size:                  1 byte
Character device major/minor:   90:18
Bad blocks are allowed:         false
Device is writable:             true

I am not sure the garbage collector will improve the available-space issue.
The issue persists regardless of whether the filesystem is mounted with
the sync option enabled or disabled, even if I allow time for the
background thread to run.
It seems especially problematic considering that I am downgrading the
filesystem, patching files to a slightly smaller size than before, and
I am still running out of disk space, no matter how long I wait for
garbage collection.
On this point, I will stick with your answer that it can be a serious
challenge if all nodes are packed and there is little free space
available.
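
If I understand your point about fsync() correctly, a per-write sync
pattern like the following (hypothetical sketch, not my actual update
tool) is what forces UBIFS to flush partially packed nodes on every
iteration:

#include <fcntl.h>
#include <unistd.h>

static int patch_file(const char *path, const char *buf, size_t len)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;

	for (size_t off = 0; off < len; off += 4096) {
		size_t n = len - off > 4096 ? 4096 : len - off;

		if (pwrite(fd, buf + off, n, off) != (ssize_t)n) {
			close(fd);
			return -1;
		}
		/* each sync forces a partially packed node to flash */
		fdatasync(fd);
	}

	close(fd);
	return 0;
}

whereas batching the writes and calling fdatasync() once after the
loop would let write-back pack the nodes together. Is that right?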

Could you please clarify your answer regarding binary patching of UBI volumes:
> Yes, you can alter a dynamic volume as you wish. But keep NAND oddities in mind:
> you need to replace whole LEBs.

It was my understanding that, because UBI keeps track of bad blocks
and erase counters, overwriting an existing and running UBI partition
with a binary diff against a newer UBI image might cause loss of that
metadata, or even corruption.
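
For concreteness, do you mean the atomic LEB change interface? I.e.,
roughly something like this (untested sketch against a UBI volume
character device, using the UBI_IOCEBCH ioctl from <mtd/ubi-user.h>):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/ubi-user.h>

static int replace_leb(const char *vol, int lebnum,
		       const char *data, int len)
{
	struct ubi_leb_change_req req;
	int fd = open(vol, O_RDWR);

	if (fd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.lebnum = lebnum;
	req.bytes = len;

	/* start the atomic LEB change, then write the new contents */
	if (ioctl(fd, UBI_IOCEBCH, &req) < 0 ||
	    write(fd, data, len) != len) {
		close(fd);
		return -1;
	}

	close(fd);
	return 0;
}

My understanding is that UBI then writes the new contents to a fresh
PEB and remaps the LEB atomically, so erase counters and bad-block
bookkeeping stay intact. Is that correct?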

> --
> Thanks,
> //richard

Thanks,
Ben.

