* [PATCH 0/3 v6] xfs: make CIL pipelining work
@ 2021-07-14  5:05 Dave Chinner
  2021-07-14  5:05 ` [PATCH 1/3] xfs: AIL needs asynchronous CIL forcing Dave Chinner
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Dave Chinner @ 2021-07-14  5:05 UTC (permalink / raw)
  To: linux-xfs

This patchset improves the behaviour of the CIL by increasing
the processing capacity available for pushing changes into the
journal.

There are two aspects to this. The first is to reduce latency for
callers that require non-blocking log force behaviour such as the
AIL.

The AIL only needs to push on the CIL to get items unpinned; it
doesn't need to wait for that push to complete before it continues
onwards trying to push out items to disk. The AIL will back off when
it reaches its push target, so it doesn't need to block on log
forces when there are pinned items in the AIL.

Hence we add a mechanism for asynchronous CIL pushes that do not
block, and convert the AIL to use it. This results in the AIL
backing off on its own short timeouts and repeatedly retrying to
make progress instead of stalling for seconds waiting for large CIL
forces to complete.
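
To illustrate the shape of this, here is a minimal sketch of a
non-blocking CIL force as the AIL might issue it. The names here
(xlog_cil_push_now() with an async flag, xfsaild_kick_cil()) are
illustrative of the approach, not necessarily the exact functions in
the patches:

/* Sketch only: illustrative types and names, not the patch code. */
#include <linux/types.h>
#include <linux/workqueue.h>

typedef uint64_t xfs_csn_t;	/* CIL push sequence number */

struct xfs_cil;

/*
 * Kick a push of all items up to @seq. When @async is true, queue
 * the push work and return immediately without waiting for the push
 * (or the log I/O it issues) to complete.
 */
void xlog_cil_push_now(struct xfs_cil *cil, xfs_csn_t seq, bool async);

/*
 * AIL side: when pinned items block progress, kick the CIL push and
 * return. The AIL then sleeps on its own short timeout and retries,
 * rather than blocking for the duration of a synchronous log force.
 */
static void xfsaild_kick_cil(struct xfs_cil *cil, xfs_csn_t cur_seq)
{
	xlog_cil_push_now(cil, cur_seq, true);	/* does not block */
}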

This ability to run async CIL pushes then highlights a problem with
pipelining of the CIL pushes. The pipelining isn't working as
intended: it is actually serialising, allowing only a single CIL
push work to be in progress at once.

This can result in the CIL push work being CPU bound, limiting the
rate at which items can be pushed to the journal. It also creates
excessive push latency when the CIL fills and hits the hard throttle
while waiting for the push work to finish the current push, start on
the new push, and swap in a new CIL context that can be committed
to.

Essentially, the problem is an implementation problem, not a design
flaw. The implementation has a single work attached to the CIL,
meaning we can only have a single outstanding push in progress at
any time. The workqueue can handle more, but we only have a single
work item to queue on it. So the fix is to move the work to the CIL
context so we can queue and process multiple works at the same time,
thereby actually allowing the CIL push work to pipeline in the
intended manner.
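
As a hedged sketch of that fix (the structure layout and function
names follow the intent described above; treat the details as
illustrative rather than the exact patch code):

/* Sketch only: shows the work_struct moving from the CIL itself to
 * the per-push CIL context. */
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct xfs_cil {
	struct workqueue_struct	*xc_push_wq;	/* push workqueue */
	/* ... CIL state ... */
};

struct xfs_cil_ctx {
	struct xfs_cil		*cil;
	struct work_struct	push_work;	/* was in struct xfs_cil */
	/* ... items, sequence, etc ... */
};

static void xlog_cil_push_work(struct work_struct *work)
{
	struct xfs_cil_ctx *ctx =
		container_of(work, struct xfs_cil_ctx, push_work);

	/* format and write ctx's items to the journal ... */
}

/*
 * Because the work is embedded in the context, a new context can be
 * swapped in and its push queued while older pushes are still
 * running, so pushes pipeline up to the workqueue's concurrency
 * limit instead of serialising on one embedded work.
 */
static void xlog_cil_queue_push(struct xfs_cil_ctx *ctx)
{
	INIT_WORK(&ctx->push_work, xlog_cil_push_work);
	queue_work(ctx->cil->xc_push_wq, &ctx->push_work);
}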

With this change, it's also very clear that the CIL workqueue really
belongs to the CIL, not the xfs_mount. Having the CIL push reference
through the log and the xfs_mount to reach its private workqueue is
quite a layering violation, so fix this up, too.
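A sketch of what that ownership change looks like at CIL init time
(again illustrative; the flags and naming follow common XFS
workqueue conventions rather than quoting the patch, and struct xlog
fields are as in fs/xfs/xfs_log_priv.h):

/* Sketch only: allocate the push workqueue in CIL setup so the CIL
 * owns it, instead of reaching log -> xfs_mount at push time. */
#include <linux/slab.h>
#include <linux/workqueue.h>

int xlog_cil_init(struct xlog *log)
{
	struct xfs_cil	*cil;

	cil = kzalloc(sizeof(*cil), GFP_KERNEL);
	if (!cil)
		return -ENOMEM;

	/*
	 * Unbound so multiple push works can run concurrently on any
	 * CPU; WQ_MEM_RECLAIM because pushing the log may be required
	 * to make progress under memory pressure.
	 */
	cil->xc_push_wq = alloc_workqueue("xfs-cil/%s",
			WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_UNBOUND,
			0, log->l_mp->m_super->s_id);
	if (!cil->xc_push_wq) {
		kfree(cil);
		return -ENOMEM;
	}

	log->l_cilp = cil;
	return 0;
}
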

This has been run through thousands of cycles of generic/019 and
generic/475 since the start record ordering issues were fixed by
"xfs: strictly order log start records", without any log recovery
failures or corruptions being recorded.

Version 6:
- split out from aggregated patchset
- add dependency on "xfs: strictly order log start records" for
  correct log recovery and runtime AIL ordering behaviour.
- rebase on 5.14-rc1 + "xfs: strictly order log start records"
- add patch moving CIL push workqueue into the CIL itself rather
  than having to go back up to the xfs_mount to access it at
  runtime.

Version 5:
- https://lore.kernel.org/linux-xfs/20210603052240.171998-1-david@fromorbit.com/


* [PATCH 0/3 v7] xfs: make CIL pipelining work
@ 2021-08-10  5:22 Dave Chinner
  2021-08-10  5:22 ` [PATCH 1/3] xfs: AIL needs asynchronous CIL forcing Dave Chinner
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Chinner @ 2021-08-10  5:22 UTC (permalink / raw)
  To: linux-xfs

This patchset improves the behaviour of the CIL by increasing
the processing capacity available for pushing changes into the
journal.

There are two aspects to this. The first is to reduce latency for
callers that require non-blocking log force behaviour such as the
AIL.

The AIL only needs to push on the CIL to get items unpinned; it
doesn't need to wait for that push to complete before it continues
onwards trying to push out items to disk. The AIL will back off when
it reaches its push target, so it doesn't need to block on log
forces when there are pinned items in the AIL.

Hence we add a mechanism for asynchronous CIL pushes that do not
block, and convert the AIL to use it. This results in the AIL
backing off on its own short timeouts and repeatedly retrying to
make progress instead of stalling for seconds waiting for large CIL
forces to complete.

This ability to run async CIL pushes then highlights a problem with
pipelining of the CIL pushes. The pipelining isn't working as
intended: it is actually serialising, allowing only a single CIL
push work to be in progress at once.

This can result in the CIL push work being CPU bound, limiting the
rate at which items can be pushed to the journal. It also creates
excessive push latency when the CIL fills and hits the hard throttle
while waiting for the push work to finish the current push, start on
the new push, and swap in a new CIL context that can be committed
to.

Essentially, the problem is an implementation problem, not a design
flaw. The implementation has a single work attached to the CIL,
meaning we can only have a single outstanding push in progress at
any time. The workqueue can handle more, but we only have a single
work item to queue on it. So the fix is to move the work to the CIL
context so we can queue and process multiple works at the same time,
thereby actually allowing the CIL push work to pipeline in the
intended manner.

With this change, it's also very clear that the CIL workqueue really
belongs to the CIL, not the xfs_mount. Having the CIL push reference
through the log and the xfs_mount to reach its private workqueue is
quite a layering violation, so fix this up, too.

This has been run through thousands of cycles of generic/019 and
generic/475 since the start record ordering issues were fixed by
"xfs: strictly order log start records", without any log recovery
failures or corruptions being recorded.

Version 7:
- rebase on 5.14-rc4 + for-next + "xfs: strictly order log start records"

Version 6:
- https://lore.kernel.org/linux-xfs/20210714050600.2632218-1-david@fromorbit.com/
- split out from aggregated patchset
- add dependency on "xfs: strictly order log start records" for
  correct log recovery and runtime AIL ordering behaviour.
- rebase on 5.14-rc1 + "xfs: strictly order log start records"
- add patch moving CIL push workqueue into the CIL itself rather
  than having to go back up to the xfs_mount to access it at
  runtime.

Version 5:
- https://lore.kernel.org/linux-xfs/20210603052240.171998-1-david@fromorbit.com/




Thread overview: 6+ messages
2021-07-14  5:05 [PATCH 0/3 v6] xfs: make CIL pipelining work Dave Chinner
2021-07-14  5:05 ` [PATCH 1/3] xfs: AIL needs asynchronous CIL forcing Dave Chinner
2021-07-14  5:05 ` [PATCH 2/3] xfs: CIL work is serialised, not pipelined Dave Chinner
2021-07-14  5:06 ` [PATCH 3/3] xfs: move the CIL workqueue to the CIL Dave Chinner
2021-07-14 23:25   ` Darrick J. Wong
2021-08-10  5:22 [PATCH 0/3 v7] xfs: make CIL pipelining work Dave Chinner
2021-08-10  5:22 ` [PATCH 1/3] xfs: AIL needs asynchronous CIL forcing Dave Chinner
