From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: tytso@mit.edu, darrick.wong@oracle.com
Cc: linux-ext4@vger.kernel.org, gregor herrmann <gregoa@debian.org>
Subject: [PATCH 2/2] e2scrub_all: fix broken stdin redirection
Date: Mon, 04 Nov 2019 17:54:20 -0800 [thread overview]
Message-ID: <157291886085.328601.12219484583340581878.stgit@magnolia> (raw)
In-Reply-To: <157291884852.328601.5452592601628272222.stgit@magnolia>
From: Darrick J. Wong <darrick.wong@oracle.com>
gregor herrmann reports that the weekly e2scrub cronjob emits these
errors:
/sbin/e2scrub_all: line 173: /proc/8234/fd/pipe:[90083173]: No such file or directory
The root cause is that ls_targets' stdout is piped into the while loop,
so the loop body's stdin is that pipe; to keep the commands inside the
loop from consuming the loop iteration items, the script redirected
their stdin back to "$(realpath /dev/stdin)", but that path does not
resolve when stdin is a pipe (as it is under cron).  Remove all the
broken hackery by reading the target list into a bash array and
iterating over the array instead.
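The array approach can be sketched as follows (list_targets here is a
hypothetical stand-in for the script's ls_targets helper; the real code
is in the diff below):

```shell
#!/bin/bash
# Stand-in for ls_targets: print one scrub target per line.
list_targets() {
    printf '%s\n' /dev/mapper/vg0-root /dev/mapper/vg0-home
}

# mapfile (bash 4+) reads one array element per input line.  Feeding it
# via process substitution, rather than a pipe into a while loop, means
# the loop body's stdin is untouched and no /dev/stdin games are needed.
mapfile -t targets < <(list_targets)

for tgt in "${targets[@]}"; do
    echo "would scrub ${tgt}"
done
```

Because the loop no longer shares stdin with the target list, commands
run inside it (systemctl, e2scrub) can read stdin freely without eating
the remaining iteration items.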
Addresses-Debian-Bug: #944033
Reported-by: gregor herrmann <gregoa@debian.org>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
scrub/e2scrub_all.in | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/scrub/e2scrub_all.in b/scrub/e2scrub_all.in
index 72e66ff6..f0336711 100644
--- a/scrub/e2scrub_all.in
+++ b/scrub/e2scrub_all.in
@@ -101,6 +101,12 @@ exec 3<&-
# indicating success to avoid spamming the sysadmin with fail messages
# when e2scrub_all is run out of cron or a systemd timer.
+if ! type mapfile >& /dev/null ; then
+ test -n "${SERVICE_MODE}" && exitcode 0
+ echo "e2scrub_all: can't find mapfile --- is bash 4.xx installed?"
+ exitcode 1
+fi
+
if ! type lsblk >& /dev/null ; then
test -n "${SERVICE_MODE}" && exitcode 0
echo "e2scrub_all: can't find lsblk --- is util-linux installed?"
@@ -165,13 +171,13 @@ escape_path_for_systemd() {
}
# Scrub any mounted fs on lvm by creating a snapshot and fscking that.
-stdin="$(realpath /dev/stdin)"
-ls_targets | while read tgt; do
+mapfile -t targets < <(ls_targets)
+for tgt in "${targets[@]}"; do
# If we're not reaping and systemd is present, try invoking the
# systemd service.
if [ "${reap}" -ne 1 ] && type systemctl > /dev/null 2>&1; then
tgt_esc="$(escape_path_for_systemd "${tgt}")"
- ${DBG} systemctl start "e2scrub@${tgt_esc}" 2> /dev/null < "${stdin}"
+ ${DBG} systemctl start "e2scrub@${tgt_esc}" 2> /dev/null
res=$?
if [ "${res}" -eq 0 ] || [ "${res}" -eq 1 ]; then
continue;
@@ -179,7 +185,7 @@ ls_targets | while read tgt; do
fi
# Otherwise use direct invocation
- ${DBG} "@root_sbindir@/e2scrub" ${scrub_args} "${tgt}" < "${stdin}"
+ ${DBG} "@root_sbindir@/e2scrub" ${scrub_args} "${tgt}"
done
exitcode 0
Thread overview: 4+ messages
2019-11-05 1:54 [PATCH 0/2] e2scrub: fix some problems Darrick J. Wong
2019-11-05 1:54 ` [PATCH 1/2] e2scrub_all: don't even reap if the config file doesn't allow it Darrick J. Wong
2019-11-05 1:54 ` Darrick J. Wong [this message]
2019-11-10 3:37 ` [PATCH 0/2] e2scrub: fix some problems Theodore Y. Ts'o