From: Ritesh Harjani <email@example.com>
Cc: firstname.lastname@example.org, Jan Kara <email@example.com>,
firstname.lastname@example.org, "Aneesh Kumar K . V" <email@example.com>,
Ritesh Harjani <firstname.lastname@example.org>
Subject: [RFCv4 0/6] Improve ext4 handling of ENOSPC with multi-threaded use-case
Date: Sun, 10 May 2020 12:38:20 +0530 [thread overview]
Message-ID: <email@example.com> (raw)
v3 -> v4:
1. Split code cleanups and debug improvements out into a separate patch series.
2. Dropped the rcu_barrier() approach since it caused some latency
in my testing of ENOSPC handling.
3. This patch series takes a different approach to improving the multi-threaded
ENOSPC handling in the ext4 mballoc code. The mail below gives more details.
Consider a case where your disk is close to full but enough space still
remains for your multi-threaded application to run. When this application's
threads try to write in parallel (e.g. a sparse file followed by mmap writes,
or even fallocating multiple files), then with the current code of the
ext4 multi-block allocator, the application may get an ENOSPC error in some
cases. Examining the disk space at that time shows there is sufficient space
remaining for the application to continue running.
1. Our internal test team was easily able to reproduce this ENOSPC error on
an upstream kernel with a 2GB ext4 image with 64K blocksize. They didn't try
above 2GB and reported this issue directly to the dev team. On examining the
free space when the application gets ENOSPC, the free space left was more
than 50% of the filesystem size in some cases.
2. For debugging/development of these patches, I used the below script to
trigger this issue quite frequently on a 64K blocksize setup with a 240MB
ext4 image.
Summary of patches and problem with current design
There are 3 main problems which these patches try to address and hence
improve the ENOSPC handling in ext4's multi-block allocator code.
1. Patch-2: Earlier we were checking whether a group is good or not (i.e.
whether it has enough free blocks to serve the request) without taking
the group's lock. This could result in a race where, if another thread is
discarding the group's prealloc list, then the allocating thread will not
consider those about-to-be-freed blocks, will report that the group is not
fit for allocation, and thus may eventually fail with an ENOSPC error.
2. Patch-4: The discard-PA algorithm only scans the PA list to free up the
additional blocks which got added to the PA. This is done by the same thread-A
which at first couldn't allocate any blocks. But there is a window where,
once the blocks have been allocated (say by some other thread-B), we
drop the group's lock and only then check whether some of these blocks could
be added to the prealloc list of the group from which we allocated them.
After that we take the lock again and add these additional blocks allocated by
thread-B to the PA list. But if thread-A scans the PA list within this
window, there is a possibility that it won't find any blocks on the PA
list and hence may return an ENOSPC error.
Hence this patch adds those additional blocks to the PA list right after
the blocks are marked as used, with the same group spinlock held.
3. Patch-3: Introduces a per-cpu discard_pa_seq counter which is incremented
whenever there is a block allocation/freeing or when the discarding of any
group's PA list has started. With this we can know when to stop the
retry logic and return an ENOSPC error if there is actually no free space left.
This approach also brings an optimization to the block allocation fast
path: before starting the block allocation, we only sample the
percpu seq count on the local cpu. Only when the allocation fails and discard
couldn't free up any blocks from any group's PA list do we sample the
percpu seq counter summed over all possible cpus, to check whether we
need to retry.
Tested fstests with default bs of 4K and bs == PAGESIZE ("-g auto")
No new failures were reported with this patch series in this testing.
1. This patch series is based on top of mballoc code cleanup patch series
posted at .
2. Patch-2 & Patch-3 are intentionally kept separate for better reviewer
attention on what each patch is trying to address.
Ritesh Harjani (6):
ext4: mballoc: Refactor ext4_mb_good_group()
ext4: mballoc: Use ext4_lock_group() around calculations involving bb_free
ext4: mballoc: Optimize ext4_mb_good_group_nolock further if grp needs init
ext4: mballoc: Add blocks to PA list under same spinlock after allocating blocks
ext4: mballoc: Refactor ext4_mb_discard_preallocations()
ext4: mballoc: Introduce pcpu seqcnt for freeing PA to improve ENOSPC handling
fs/ext4/mballoc.c | 249 +++++++++++++++++++++++++++++++++-------------
1 file changed, 179 insertions(+), 70 deletions(-)