* [PATCH v2 0/4] xfs_repair: check quota counters
From: Darrick J. Wong @ 2020-05-14 16:56 UTC
  To: sandeen, darrick.wong; +Cc: linux-xfs

Hi all,

One of the larger gaps between xfs_check and xfs_repair is that repair
cannot check the quota counters against the filesystem contents.  This
series adds that checking ability to repair's phase 7, which should
suffice to drop xfs_check from fstests.  It also means that repair no
longer forces a quotacheck on the next mount when the quota counts were
already correct.

For v2, I moved the Makefile changes into a separate patch, fixed the
labelling and error message issues that Eric pointed out, and fixed a
bug where repair wasn't clearing the CHKD flags correctly on V4
filesystems.

If you're going to start using this mess, you probably ought to just
pull from my git tree, which is linked below.

This is an extraordinary way to destroy everything.  Enjoy!
Comments and questions are, as always, welcome.

--D

xfsprogs git tree:
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfsprogs-dev.git/log/?h=repair-quotacheck


* [PATCH 1/4] xfs_repair: alphabetize HFILES and CFILES
From: Darrick J. Wong @ 2020-05-14 16:56 UTC
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

Convert the definitions of HFILES and CFILES to lists that can be sorted
easily (in vim, anyway), then fix the alphabetization of the makefile
targets.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 repair/Makefile |   67 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 57 insertions(+), 10 deletions(-)


diff --git a/repair/Makefile b/repair/Makefile
index 8cc1ee68..78a521c5 100644
--- a/repair/Makefile
+++ b/repair/Makefile
@@ -9,16 +9,63 @@ LSRCFILES = README
 
 LTCOMMAND = xfs_repair
 
-HFILES = agheader.h attr_repair.h avl.h bload.h bmap.h btree.h \
-	da_util.h dinode.h dir2.h err_protos.h globals.h incore.h protos.h \
-	rt.h progress.h scan.h versions.h prefetch.h rmap.h slab.h threads.h
-
-CFILES = agheader.c attr_repair.c avl.c bload.c bmap.c btree.c \
-	da_util.c dino_chunks.c dinode.c dir2.c globals.c incore.c \
-	incore_bmc.c init.c incore_ext.c incore_ino.c phase1.c \
-	phase2.c phase3.c phase4.c phase5.c phase6.c phase7.c \
-	progress.c prefetch.c rmap.c rt.c sb.c scan.c slab.c threads.c \
-	versions.c xfs_repair.c
+HFILES = \
+	agheader.h \
+	attr_repair.h \
+	avl.h \
+	bload.h \
+	bmap.h \
+	btree.h \
+	da_util.h \
+	dinode.h \
+	dir2.h \
+	err_protos.h \
+	globals.h \
+	incore.h \
+	prefetch.h \
+	progress.h \
+	protos.h \
+	rmap.h \
+	rt.h \
+	scan.h \
+	slab.h \
+	threads.h \
+	versions.h
+
+CFILES = \
+	agheader.c \
+	attr_repair.c \
+	avl.c \
+	bload.c \
+	bmap.c \
+	btree.c \
+	da_util.c \
+	dino_chunks.c \
+	dinode.c \
+	dir2.c \
+	globals.c \
+	incore_bmc.c \
+	incore.c \
+	incore_ext.c \
+	incore_ino.c \
+	init.c \
+	phase1.c \
+	phase2.c \
+	phase3.c \
+	phase4.c \
+	phase5.c \
+	phase6.c \
+	phase7.c \
+	prefetch.c \
+	progress.c \
+	rmap.c \
+	rt.c \
+	sb.c \
+	scan.c \
+	slab.c \
+	threads.c \
+	versions.c \
+	xfs_repair.c
 
 LLDLIBS = $(LIBXFS) $(LIBXLOG) $(LIBXCMD) $(LIBFROG) $(LIBUUID) $(LIBRT) \
 	$(LIBPTHREAD) $(LIBBLKID)



* [PATCH 2/4] xfs_repair: fix clearing of quota CHKD flags
From: Darrick J. Wong @ 2020-05-14 16:56 UTC
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

XFS_ALL_QUOTA_CHKD, being an OR of the [UGP]QUOTA_CHKD flags, is a
bitset of the possible *incore* quota-checked flags.  This means that it
cannot be used in a comparison with the *ondisk* quota-checked flags,
because V4 filesystems set OQUOTA_CHKD rather than the [GU]QUOTA_CHKD
flags (which are V5 flags).

If you have a V4 filesystem with user quotas disabled but either group
or project quotas enabled, xfs_repair will /not/ claim that the quota
info will be regenerated on the next mount, as it does in every other
situation.  This is because the ondisk qflags field has OQUOTA_CHKD set,
but repair fails to notice it.

Worse, if you have a V4 filesystem with user and group quotas enabled
and mild corruption, repair will claim that the quota info will be
regenerated.  If you then mount the fs with only group quotas enabled,
quotacheck will not run to correct the data because repair failed to
clear OQUOTA_CHKD properly.

These are fairly benign and unlikely scenarios, but when we add
quotacheck capabilities to xfs_repair, it will complain about the
incorrect quota counts, which causes regressions in xfs/278.
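
For reference, a minimal sketch of the mask mismatch; the flag values
are quoted here from the libxfs quota headers for illustration and this
snippet is not part of the patch:

/* Ondisk sb_qflags bits: */
#define XFS_UQUOTA_CHKD		0x0004	/* user quota checked */
#define XFS_OQUOTA_CHKD		0x0020	/* V4 "other" (grp/prj) checked */
#define XFS_GQUOTA_CHKD		0x0100	/* V5 group quota checked */
#define XFS_PQUOTA_CHKD		0x0400	/* V5 project quota checked */
#define XFS_ALL_QUOTA_CHKD \
	(XFS_UQUOTA_CHKD | XFS_GQUOTA_CHKD | XFS_PQUOTA_CHKD)

/*
 * A V4 fs with only group (or project) quota enabled carries
 * OQUOTA_CHKD in its ondisk qflags, so the old test
 *
 *	sb_qflags & XFS_ALL_QUOTA_CHKD
 *
 * evaluates to 0 and repair never warns or clears anything, which is
 * why the hunk below clears all four CHKD variants explicitly.
 */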

Fixes: 342aef1ec0ec ("xfsprogs: Remove incore use of XFS_OQUOTA_ENFD and XFS_OQUOTA_CHKD")
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 repair/xfs_repair.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)


diff --git a/repair/xfs_repair.c b/repair/xfs_repair.c
index 8fbd3649..9ab3cafa 100644
--- a/repair/xfs_repair.c
+++ b/repair/xfs_repair.c
@@ -1100,10 +1100,13 @@ _("Warning:  project quota information would be cleared.\n"
 
 	dsb = sbp->b_addr;
 
-	if (be16_to_cpu(dsb->sb_qflags) & XFS_ALL_QUOTA_CHKD) {
+	if (mp->m_sb.sb_qflags & XFS_ALL_QUOTA_CHKD) {
 		do_warn(_("Note - quota info will be regenerated on next "
 			"quota mount.\n"));
-		dsb->sb_qflags &= cpu_to_be16(~XFS_ALL_QUOTA_CHKD);
+		dsb->sb_qflags &= cpu_to_be16(~(XFS_UQUOTA_CHKD |
+						XFS_GQUOTA_CHKD |
+						XFS_PQUOTA_CHKD |
+						XFS_OQUOTA_CHKD));
 	}
 
 	if (copied_sunit) {



* [PATCH 3/4] xfs_repair: check quota values if quota was loaded
From: Darrick J. Wong @ 2020-05-14 16:56 UTC
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

If the filesystem looks like it had up-to-date quota information, check
it against what's actually in the filesystem and report any
discrepancies we find.  This closes one of the major gaps between the
corruptions detected by xfs_check and those detected by xfs_repair.
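
Since the new file is long, here is a condensed, self-contained sketch
of the idea; the struct and helpers below are illustrative stand-ins,
not the functions this patch adds:

#include <stdbool.h>
#include <stdint.h>

/* Usage observed for one quota id while phase 7 walks every inode. */
struct qc_counts {
	uint64_t	bcount;		/* data device blocks */
	uint64_t	rtbcount;	/* realtime device blocks */
	uint64_t	icount;		/* inodes owned by this id */
};

/* Accumulate one inode's usage (cf. quotacheck_adjust below). */
static void
observe_inode(struct qc_counts *c, uint64_t nblocks, uint64_t rtblocks)
{
	c->bcount += nblocks - rtblocks;
	c->rtbcount += rtblocks;
	c->icount++;
}

/* Compare with what the ondisk dquot claims (cf. qc_check_dquot). */
static bool
dquot_matches(const struct qc_counts *c, uint64_t d_bcount,
	      uint64_t d_rtbcount, uint64_t d_icount)
{
	return c->bcount == d_bcount &&
	       c->rtbcount == d_rtbcount &&
	       c->icount == d_icount;
}

The real code keeps one avl64 tree of such records per quota type,
keyed by id, and zeroes the CHKD flags on any mismatch or runtime error
so that the kernel reruns quotacheck at the next mount.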

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 man/man8/xfs_repair.8 |    4 
 repair/Makefile       |    2 
 repair/phase7.c       |   21 ++
 repair/quotacheck.c   |  540 +++++++++++++++++++++++++++++++++++++++++++++++++
 repair/quotacheck.h   |   15 +
 repair/xfs_repair.c   |    6 +
 6 files changed, 584 insertions(+), 4 deletions(-)
 create mode 100644 repair/quotacheck.c
 create mode 100644 repair/quotacheck.h


diff --git a/man/man8/xfs_repair.8 b/man/man8/xfs_repair.8
index 75d36e62..cc6a2be8 100644
--- a/man/man8/xfs_repair.8
+++ b/man/man8/xfs_repair.8
@@ -155,6 +155,10 @@ if there are two allocation groups and the two superblocks do not
 agree on the filesystem geometry.  Only use this option if you validated
 the geometry yourself and know what you are doing.  If In doubt run
 in no modify mode first.
+.TP
+.BI noquota
+Don't validate quota counters at all.
+Quotacheck will be run during the next mount to recalculate all values.
 .RE
 .TP
 .B \-t " interval"
diff --git a/repair/Makefile b/repair/Makefile
index 78a521c5..e3a74adc 100644
--- a/repair/Makefile
+++ b/repair/Makefile
@@ -25,6 +25,7 @@ HFILES = \
 	prefetch.h \
 	progress.h \
 	protos.h \
+	quotacheck.h \
 	rmap.h \
 	rt.h \
 	scan.h \
@@ -58,6 +59,7 @@ CFILES = \
 	phase7.c \
 	prefetch.c \
 	progress.c \
+	quotacheck.c \
 	rmap.c \
 	rt.c \
 	sb.c \
diff --git a/repair/phase7.c b/repair/phase7.c
index c2996470..47e76b56 100644
--- a/repair/phase7.c
+++ b/repair/phase7.c
@@ -15,6 +15,7 @@
 #include "versions.h"
 #include "progress.h"
 #include "threads.h"
+#include "quotacheck.h"
 
 static void
 update_inode_nlinks(
@@ -97,6 +98,10 @@ do_link_updates(
 
 	for (irec = findfirst_inode_rec(agno); irec;
 	     irec = next_ino_rec(irec)) {
+		xfs_ino_t	ino;
+
+		ino = XFS_AGINO_TO_INO(mp, agno, irec->ino_startnum);
+
 		for (j = 0; j < XFS_INODES_PER_CHUNK; j++)  {
 			ASSERT(is_inode_confirmed(irec, j));
 
@@ -109,10 +114,8 @@ do_link_updates(
 			ASSERT(no_modify || nrefs > 0);
 
 			if (get_inode_disk_nlinks(irec, j) != nrefs)
-				update_inode_nlinks(wq->wq_ctx,
-					XFS_AGINO_TO_INO(mp, agno,
-						irec->ino_startnum + j),
-					nrefs);
+				update_inode_nlinks(wq->wq_ctx, ino + j, nrefs);
+			quotacheck_adjust(mp, ino + j);
 		}
 	}
 
@@ -126,6 +129,7 @@ phase7(
 {
 	struct workqueue	wq;
 	int			agno;
+	int			ret;
 
 	if (!no_modify)
 		do_log(_("Phase 7 - verify and correct link counts...\n"));
@@ -134,6 +138,9 @@ phase7(
 
 	set_progress_msg(PROGRESS_FMT_CORR_LINK, (uint64_t) glob_agcount);
 
+	ret = quotacheck_setup(mp);
+	if (ret)
+		do_error(_("unable to set up quotacheck, err=%d\n"), ret);
 	create_work_queue(&wq, mp, scan_threads);
 
 	for (agno = 0; agno < mp->m_sb.sb_agcount; agno++)
@@ -141,5 +148,11 @@ phase7(
 
 	destroy_work_queue(&wq);
 
+	quotacheck_verify(mp, XFS_DQ_USER);
+	quotacheck_verify(mp, XFS_DQ_GROUP);
+	quotacheck_verify(mp, XFS_DQ_PROJ);
+
+	quotacheck_teardown();
+
 	print_final_rpt();
 }
diff --git a/repair/quotacheck.c b/repair/quotacheck.c
new file mode 100644
index 00000000..a9e4f1ba
--- /dev/null
+++ b/repair/quotacheck.c
@@ -0,0 +1,540 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2020 Oracle.  All Rights Reserved.
+ * Author: Darrick J. Wong <darrick.wong@oracle.com>
+ */
+#include <libxfs.h>
+#include "globals.h"
+#include "versions.h"
+#include "err_protos.h"
+#include "libfrog/avl64.h"
+#include "quotacheck.h"
+
+/* Allow the xfs_repair caller to skip quotacheck entirely. */
+static bool noquota;
+
+void quotacheck_skip(void)
+{
+	noquota = true;
+}
+
+/*
+ * XFS_*QUOTA_CHKD flags for all the quota types that we verified.  This field
+ * will be cleared if we encounter any problems (runtime errors, mismatches).
+ */
+static uint16_t chkd_flags;
+
+/* Global incore dquot tree */
+struct qc_dquots {
+	pthread_mutex_t		lock;
+	struct avl64tree_desc	tree;
+
+	/* One of XFS_DQ_{USER,GROUP,PROJ} */
+	uint16_t		type;
+};
+
+#define qc_dquots_foreach(dquots, pos, n) \
+	for (pos = (dquots)->tree.avl_firstino, n = pos ? pos->avl_nextino : NULL; \
+			pos != NULL; \
+			pos = n, n = pos ? pos->avl_nextino : NULL)
+
+static struct qc_dquots *user_dquots;
+static struct qc_dquots *group_dquots;
+static struct qc_dquots *proj_dquots;
+
+/* This record was found in the on-disk dquot information. */
+#define QC_REC_ONDISK		(1U << 31)
+
+struct qc_rec {
+	struct avl64node	node;
+	pthread_mutex_t		lock;
+
+	uint32_t		id;
+	uint32_t		flags;
+	uint64_t		bcount;
+	uint64_t		rtbcount;
+	uint64_t		icount;
+};
+
+static const char *
+qflags_typestr(
+	unsigned int		type)
+{
+	if (type & XFS_DQ_USER)
+		return _("user quota");
+	else if (type & XFS_DQ_GROUP)
+		return _("group quota");
+	else if (type & XFS_DQ_PROJ)
+		return _("project quota");
+	return NULL;
+}
+
+/* Operations for the avl64 tree. */
+
+static uint64_t
+qc_avl_start(
+	struct avl64node	*node)
+{
+	struct qc_rec		*qrec;
+
+	qrec = container_of(node, struct qc_rec, node);
+	return qrec->id;
+}
+
+static uint64_t
+qc_avl_end(
+	struct avl64node	*node)
+{
+	return qc_avl_start(node) + 1;
+}
+
+static struct avl64ops qc_cache_ops = {
+	.avl_start		= qc_avl_start,
+	.avl_end		= qc_avl_end,
+};
+
+/* Find a qc_rec in the incore cache, or allocate one if need be. */
+static struct qc_rec *
+qc_rec_get(
+	struct qc_dquots	*dquots,
+	uint32_t		id,
+	bool			can_alloc)
+{
+	struct qc_rec		*qrec;
+	struct avl64node	*node;
+
+	pthread_mutex_lock(&dquots->lock);
+	node = avl64_find(&dquots->tree, id);
+	if (!node && can_alloc) {
+		qrec = calloc(sizeof(struct qc_rec), 1);
+		if (qrec) {
+			qrec->id = id;
+			node = avl64_insert(&dquots->tree, &qrec->node);
+			if (!node)
+				free(qrec);
+			else
+				pthread_mutex_init(&qrec->lock, NULL);
+		}
+	}
+	pthread_mutex_unlock(&dquots->lock);
+
+	return node ? container_of(node, struct qc_rec, node) : NULL;
+}
+
+/* Bump up an incore dquot's counters. */
+static void
+qc_adjust(
+	struct qc_dquots	*dquots,
+	uint32_t		id,
+	uint64_t		bcount,
+	uint64_t		rtbcount)
+{
+	struct qc_rec		*qrec = qc_rec_get(dquots, id, true);
+
+	if (!qrec) {
+		do_warn(_("Ran out of memory while running quotacheck!\n"));
+		chkd_flags = 0;
+		return;
+	}
+
+	pthread_mutex_lock(&qrec->lock);
+	qrec->bcount += bcount;
+	qrec->rtbcount += rtbcount;
+	qrec->icount++;
+	pthread_mutex_unlock(&qrec->lock);
+}
+
+/* Count the realtime blocks allocated to a file. */
+static xfs_filblks_t
+qc_count_rtblocks(
+	struct xfs_inode	*ip)
+{
+	struct xfs_iext_cursor	icur;
+	struct xfs_bmbt_irec	got;
+	xfs_filblks_t		count = 0;
+	struct xfs_ifork	*ifp = XFS_IFORK_PTR(ip, XFS_DATA_FORK);
+	int			error;
+
+	if (!(ifp->if_flags & XFS_IFEXTENTS)) {
+		error = -libxfs_iread_extents(NULL, ip, XFS_DATA_FORK);
+		if (error) {
+			do_warn(
+_("could not read ino %"PRIu64" extents, err=%d\n"),
+				ip->i_ino, error);
+			chkd_flags = 0;
+			return 0;
+		}
+	}
+
+	for_each_xfs_iext(ifp, &icur, &got)
+		if (!isnullstartblock(got.br_startblock))
+			count += got.br_blockcount;
+	return count;
+}
+
+/* Add this inode's information to the quota counts. */
+void
+quotacheck_adjust(
+	struct xfs_mount	*mp,
+	xfs_ino_t		ino)
+{
+	struct xfs_inode	*ip;
+	uint64_t		blocks;
+	uint64_t		rtblks = 0;
+	int			error;
+
+	/*
+	 * If the fs doesn't have any quota files to check against, skip this
+	 * step.
+	 */
+	if (!user_dquots && !group_dquots && !proj_dquots)
+		return;
+
+	/* Skip if a previous quotacheck adjustment failed. */
+	if (chkd_flags == 0)
+		return;
+
+	/* Quota files are not included in quota counts. */
+	if (ino == mp->m_sb.sb_uquotino ||
+	    ino == mp->m_sb.sb_gquotino ||
+	    ino == mp->m_sb.sb_pquotino)
+		return;
+
+	error = -libxfs_iget(mp, NULL, ino, 0, &ip, &xfs_default_ifork_ops);
+	if (error) {
+		do_warn(
+	_("could not open file %"PRIu64" for quotacheck, err=%d\n"),
+				ino, error);
+		chkd_flags = 0;
+		return;
+	}
+
+	/* Count the file's blocks. */
+	if (XFS_IS_REALTIME_INODE(ip))
+		rtblks = qc_count_rtblocks(ip);
+	blocks = ip->i_d.di_nblocks - rtblks;
+
+	if (user_dquots)
+		qc_adjust(user_dquots, i_uid_read(VFS_I(ip)), blocks, rtblks);
+	if (group_dquots)
+		qc_adjust(group_dquots, i_gid_read(VFS_I(ip)), blocks, rtblks);
+	if (proj_dquots)
+		qc_adjust(proj_dquots, ip->i_d.di_projid, blocks, rtblks);
+
+	libxfs_irele(ip);
+}
+
+/* Compare this on-disk dquot against whatever we observed. */
+static void
+qc_check_dquot(
+	struct xfs_disk_dquot	*ddq,
+	struct qc_dquots	*dquots)
+{
+	struct qc_rec		*qrec;
+	struct qc_rec		empty = {
+		.bcount		= 0,
+		.rtbcount	= 0,
+		.icount		= 0,
+	};
+	uint32_t		id = be32_to_cpu(ddq->d_id);
+
+	qrec = qc_rec_get(dquots, id, false);
+	if (!qrec)
+		qrec = &empty;
+
+	if (be64_to_cpu(ddq->d_bcount) != qrec->bcount) {
+		do_warn(_("%s id %u has bcount %llu, expected %"PRIu64"\n"),
+				qflags_typestr(dquots->type), id,
+				be64_to_cpu(ddq->d_bcount), qrec->bcount);
+		chkd_flags = 0;
+	}
+
+	if (be64_to_cpu(ddq->d_rtbcount) != qrec->rtbcount) {
+		do_warn(_("%s id %u has rtbcount %llu, expected %"PRIu64"\n"),
+				qflags_typestr(dquots->type), id,
+				be64_to_cpu(ddq->d_rtbcount), qrec->rtbcount);
+		chkd_flags = 0;
+	}
+
+	if (be64_to_cpu(ddq->d_icount) != qrec->icount) {
+		do_warn(_("%s id %u has icount %llu, expected %"PRIu64"\n"),
+				qflags_typestr(dquots->type), id,
+				be64_to_cpu(ddq->d_icount), qrec->icount);
+		chkd_flags = 0;
+	}
+
+	/*
+	 * Mark that we found the record on disk.  Skip locking here because
+	 * we're checking the dquots serially.
+	 */
+	qrec->flags |= QC_REC_ONDISK;
+}
+
+/* Walk every dquot in every block in this quota inode extent and compare. */
+static int
+qc_walk_dquot_extent(
+	struct xfs_inode	*ip,
+	struct xfs_bmbt_irec	*map,
+	struct qc_dquots	*dquots)
+{
+	struct xfs_mount	*mp = ip->i_mount;
+	struct xfs_buf		*bp;
+	struct xfs_dqblk	*dqb;
+	xfs_filblks_t		dqchunklen;
+	xfs_filblks_t		bno;
+	unsigned int		dqperchunk;
+	int			error = 0;
+
+	dqchunklen = XFS_FSB_TO_BB(mp, XFS_DQUOT_CLUSTER_SIZE_FSB);
+	dqperchunk = libxfs_calc_dquots_per_chunk(dqchunklen);
+
+	for (bno = 0; bno < map->br_blockcount; bno++) {
+		unsigned int	dqnr;
+		uint64_t	dqid;
+
+		error = -libxfs_buf_read(mp->m_dev,
+				XFS_FSB_TO_DADDR(mp, map->br_startblock + bno),
+				dqchunklen, 0, &bp, &xfs_dquot_buf_ops);
+		if (error) {
+			do_warn(
+_("cannot read %s inode %"PRIu64", block %"PRIu64", disk block %"PRIu64", err=%d\n"),
+				qflags_typestr(dquots->type), ip->i_ino,
+				map->br_startoff + bno,
+				map->br_startblock + bno, error);
+			chkd_flags = 0;
+			return error;
+		}
+
+		dqb = bp->b_addr;
+		dqid = map->br_startoff * dqperchunk;
+		for (dqnr = 0;
+		     dqnr < dqperchunk && dqid <= UINT_MAX;
+		     dqnr++, dqb++, dqid++)
+			qc_check_dquot(&dqb->dd_diskdq, dquots);
+		libxfs_buf_relse(bp);
+	}
+
+	return error;
+}
+
+/* Check the incore quota counts with what's on disk. */
+void
+quotacheck_verify(
+	struct xfs_mount	*mp,
+	unsigned int		type)
+{
+	struct xfs_bmbt_irec	map;
+	struct xfs_iext_cursor	icur;
+	struct xfs_inode	*ip;
+	struct xfs_ifork	*ifp;
+	struct qc_dquots	*dquots = NULL;
+	struct avl64node	*node, *n;
+	xfs_ino_t		ino = NULLFSINO;
+	int			error;
+
+	switch (type) {
+	case XFS_DQ_USER:
+		ino = mp->m_sb.sb_uquotino;
+		dquots = user_dquots;
+		break;
+	case XFS_DQ_GROUP:
+		ino = mp->m_sb.sb_gquotino;
+		dquots = group_dquots;
+		break;
+	case XFS_DQ_PROJ:
+		ino = mp->m_sb.sb_pquotino;
+		dquots = proj_dquots;
+		break;
+	}
+
+	/*
+	 * If we decided not to build incore records or there were errors in
+	 * collecting them that caused us to clear chkd_flags, bail out early.
+	 * No sense in complaining more about garbage.
+	 */
+	if (!dquots || !chkd_flags)
+		return;
+
+	error = -libxfs_iget(mp, NULL, ino, 0, &ip, &xfs_default_ifork_ops);
+	if (error) {
+		do_warn(
+	_("could not open %s inode %"PRIu64" for quotacheck, err=%d\n"),
+			qflags_typestr(type), ino, error);
+		chkd_flags = 0;
+		return;
+	}
+
+	ifp = XFS_IFORK_PTR(ip, XFS_DATA_FORK);
+	if (!(ifp->if_flags & XFS_IFEXTENTS)) {
+		error = -libxfs_iread_extents(NULL, ip, XFS_DATA_FORK);
+		if (error) {
+			do_warn(
+	_("could not read %s inode %"PRIu64" extents, err=%d\n"),
+				qflags_typestr(type), ip->i_ino, error);
+			chkd_flags = 0;
+			goto err;
+		}
+	}
+
+	/* Walk each extent of the quota inode and compare counters. */
+	for_each_xfs_iext(ifp, &icur, &map) {
+		if (map.br_startblock != HOLESTARTBLOCK) {
+			error = qc_walk_dquot_extent(ip, &map, dquots);
+			if (error)
+				goto err;
+		}
+	}
+
+	/*
+	 * We constructed incore dquots to account for every file we saw on
+	 * disk, and then walked all on-disk dquots to compare.  Complain about
+	 * incore dquots that weren't touched during the comparison, because
+	 * that means something is missing from the dquot file.
+	 */
+	qc_dquots_foreach(dquots, node, n) {
+		struct qc_rec	*qrec;
+
+		qrec = container_of(node, struct qc_rec, node);
+		if (!(qrec->flags & QC_REC_ONDISK)) {
+			do_warn(
+_("%s record for id %u not found on disk (bcount %"PRIu64" rtbcount %"PRIu64" icount %"PRIu64")\n"),
+				qflags_typestr(type), qrec->id,
+				qrec->bcount, qrec->rtbcount, qrec->icount);
+			chkd_flags = 0;
+		}
+	}
+err:
+	libxfs_irele(ip);
+}
+
+/*
+ * Decide if we want to run quotacheck on a particular quota type.  Returns
+ * true only if the inode isn't lost, the fs says quotacheck ran, and the
+ * ondisk pointer to the quota inode isn't "unset" (e.g. NULL or zero).
+ */
+static inline bool
+qc_has_quotafile(
+	struct xfs_mount	*mp,
+	unsigned int		type)
+{
+	bool			lost;
+	xfs_ino_t		ino;
+	unsigned int		qflag;
+
+	switch (type) {
+	case XFS_DQ_USER:
+		lost = lost_uquotino;
+		ino = mp->m_sb.sb_uquotino;
+		qflag = XFS_UQUOTA_CHKD;
+		break;
+	case XFS_DQ_GROUP:
+		lost = lost_gquotino;
+		ino = mp->m_sb.sb_gquotino;
+		qflag = XFS_GQUOTA_CHKD;
+		break;
+	case XFS_DQ_PROJ:
+		lost = lost_pquotino;
+		ino = mp->m_sb.sb_pquotino;
+		qflag = XFS_PQUOTA_CHKD;
+		break;
+	default:
+		return false;
+	}
+
+	if (lost)
+		return false;
+	if (!(mp->m_sb.sb_qflags & qflag))
+		return false;
+	if (ino == NULLFSINO || ino == 0)
+		return false;
+	return true;
+}
+
+/* Initialize an incore dquot tree. */
+static struct qc_dquots *
+qc_dquots_init(
+	uint16_t		type)
+{
+	struct qc_dquots	*dquots;
+
+	dquots = calloc(1, sizeof(struct qc_dquots));
+	if (!dquots)
+		return NULL;
+
+	dquots->type = type;
+	pthread_mutex_init(&dquots->lock, NULL);
+	avl64_init_tree(&dquots->tree, &qc_cache_ops);
+	return dquots;
+}
+
+/* Set up incore context for quota checks. */
+int
+quotacheck_setup(
+	struct xfs_mount	*mp)
+{
+	chkd_flags = 0;
+
+	/*
+	 * If the superblock said quotas are disabled or was missing pointers
+	 * to any quota inodes, don't bother checking.
+	 */
+	if (!fs_quotas || lost_quotas || noquota)
+		return 0;
+
+	if (qc_has_quotafile(mp, XFS_DQ_USER)) {
+		user_dquots = qc_dquots_init(XFS_DQ_USER);
+		if (!user_dquots)
+			goto err;
+		chkd_flags |= XFS_UQUOTA_CHKD;
+	}
+
+	if (qc_has_quotafile(mp, XFS_DQ_GROUP)) {
+		group_dquots = qc_dquots_init(XFS_DQ_GROUP);
+		if (!group_dquots)
+			goto err;
+		chkd_flags |= XFS_GQUOTA_CHKD;
+	}
+
+	if (qc_has_quotafile(mp, XFS_DQ_PROJ)) {
+		proj_dquots = qc_dquots_init(XFS_DQ_PROJ);
+		if (!proj_dquots)
+			goto err;
+		chkd_flags |= XFS_PQUOTA_CHKD;
+	}
+
+	return 0;
+err:
+	chkd_flags = 0;
+	quotacheck_teardown();
+	return ENOMEM;
+}
+
+/* Purge all quotacheck records in a given cache. */
+static void
+qc_purge(
+	struct qc_dquots	**dquotsp)
+{
+	struct qc_dquots	*dquots = *dquotsp;
+	struct qc_rec		*qrec;
+	struct avl64node	*node;
+	struct avl64node	*n;
+
+	if (!dquots)
+		return;
+
+	qc_dquots_foreach(dquots, node, n) {
+		qrec = container_of(node, struct qc_rec, node);
+		free(qrec);
+	}
+	free(dquots);
+	*dquotsp = NULL;
+}
+
+/* Tear down all the incore context from quotacheck. */
+void
+quotacheck_teardown(void)
+{
+	qc_purge(&user_dquots);
+	qc_purge(&group_dquots);
+	qc_purge(&proj_dquots);
+}
diff --git a/repair/quotacheck.h b/repair/quotacheck.h
new file mode 100644
index 00000000..08e11d17
--- /dev/null
+++ b/repair/quotacheck.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2020 Oracle.  All Rights Reserved.
+ * Author: Darrick J. Wong <darrick.wong@oracle.com>
+ */
+#ifndef __XFS_REPAIR_QUOTACHECK_H__
+#define __XFS_REPAIR_QUOTACHECK_H__
+
+void quotacheck_skip(void);
+void quotacheck_adjust(struct xfs_mount *mp, xfs_ino_t ino);
+void quotacheck_verify(struct xfs_mount *mp, unsigned int type);
+int quotacheck_setup(struct xfs_mount *mp);
+void quotacheck_teardown(void);
+
+#endif /* __XFS_REPAIR_QUOTACHECK_H__ */
diff --git a/repair/xfs_repair.c b/repair/xfs_repair.c
index 9ab3cafa..6a796742 100644
--- a/repair/xfs_repair.c
+++ b/repair/xfs_repair.c
@@ -25,6 +25,7 @@
 #include "libfrog/fsgeom.h"
 #include "libfrog/platform.h"
 #include "bload.h"
+#include "quotacheck.h"
 
 /*
  * option tables for getsubopt calls
@@ -42,6 +43,7 @@ enum o_opt_nums {
 	PHASE2_THREADS,
 	BLOAD_LEAF_SLACK,
 	BLOAD_NODE_SLACK,
+	NOQUOTA,
 	O_MAX_OPTS,
 };
 
@@ -54,6 +56,7 @@ static char *o_opts[] = {
 	[PHASE2_THREADS]	= "phase2_threads",
 	[BLOAD_LEAF_SLACK]	= "debug_bload_leaf_slack",
 	[BLOAD_NODE_SLACK]	= "debug_bload_node_slack",
+	[NOQUOTA]		= "noquota",
 	[O_MAX_OPTS]		= NULL,
 };
 
@@ -277,6 +280,9 @@ process_args(int argc, char **argv)
 		_("-o debug_bload_node_slack requires a parameter\n"));
 					bload_node_slack = (int)strtol(val, NULL, 0);
 					break;
+				case NOQUOTA:
+					quotacheck_skip();
+					break;
 				default:
 					unknown('o', val);
 					break;



* [PATCH 4/4] xfs_repair: skip mount time quotacheck if our quotacheck was ok
From: Darrick J. Wong @ 2020-05-14 16:56 UTC
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

If we verified that the incore quota counts match the ondisk quota
contents, we can leave the CHKD flags set so that the next mount doesn't
have to repeat the quotacheck.
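
A minimal sketch of the new check in xfs_repair.c (illustrative only;
quotacheck_results() is the helper added below and the CHKD masks come
from the libxfs quota headers):

#include <stdbool.h>
#include <stdint.h>

/*
 * quotacheck_results() returns the CHKD bits for every quota type that
 * phase 7 verified cleanly, or 0 if any error or mismatch was seen.
 * Keep the superblock's CHKD flags only when the two sets agree.
 */
static inline bool
must_clear_chkd(uint16_t sb_qflags, uint16_t all_quota_chkd,
		uint16_t verified_chkd)
{
	return (sb_qflags & all_quota_chkd) != verified_chkd;
}

So when all three phase 7 verifications pass, the flags are left in
place and the next mount skips quotacheck; any discrepancy (or running
repair with -o noquota) zeroes the result and forces one.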

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 repair/quotacheck.c |   10 ++++++++++
 repair/quotacheck.h |    1 +
 repair/xfs_repair.c |    2 +-
 3 files changed, 12 insertions(+), 1 deletion(-)


diff --git a/repair/quotacheck.c b/repair/quotacheck.c
index a9e4f1ba..7a94f465 100644
--- a/repair/quotacheck.c
+++ b/repair/quotacheck.c
@@ -24,6 +24,16 @@ void quotacheck_skip(void)
  */
 static uint16_t chkd_flags;
 
+/*
+ * Return CHKD flags for the quota types that we checked.  If we encountered
+ * any errors at all, return zero.
+ */
+uint16_t
+quotacheck_results(void)
+{
+	return chkd_flags;
+}
+
 /* Global incore dquot tree */
 struct qc_dquots {
 	pthread_mutex_t		lock;
diff --git a/repair/quotacheck.h b/repair/quotacheck.h
index 08e11d17..d745696f 100644
--- a/repair/quotacheck.h
+++ b/repair/quotacheck.h
@@ -9,6 +9,7 @@
 void quotacheck_skip(void);
 void quotacheck_adjust(struct xfs_mount *mp, xfs_ino_t ino);
 void quotacheck_verify(struct xfs_mount *mp, unsigned int type);
+uint16_t quotacheck_results(void);
 int quotacheck_setup(struct xfs_mount *mp);
 void quotacheck_teardown(void);
 
diff --git a/repair/xfs_repair.c b/repair/xfs_repair.c
index 6a796742..a82f96f8 100644
--- a/repair/xfs_repair.c
+++ b/repair/xfs_repair.c
@@ -1106,7 +1106,7 @@ _("Warning:  project quota information would be cleared.\n"
 
 	dsb = sbp->b_addr;
 
-	if (mp->m_sb.sb_qflags & XFS_ALL_QUOTA_CHKD) {
+	if ((mp->m_sb.sb_qflags & XFS_ALL_QUOTA_CHKD) != quotacheck_results()) {
 		do_warn(_("Note - quota info will be regenerated on next "
 			"quota mount.\n"));
 		dsb->sb_qflags &= cpu_to_be16(~(XFS_UQUOTA_CHKD |



* Re: [PATCH 4/4] xfs_repair: skip mount time quotacheck if our quotacheck was ok
From: Christoph Hellwig @ 2020-07-15 18:30 UTC
  To: Darrick J. Wong; +Cc: sandeen, linux-xfs

On Tue, Jul 14, 2020 at 02:46:34PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <darrick.wong@oracle.com>
> 
> If we verified that the incore quota counts match the ondisk quota
> contents, we can leave the CHKD flags set so that the next mount doesn't
> have to repeat the quotacheck.
> 
> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* [PATCH 4/4] xfs_repair: skip mount time quotacheck if our quotacheck was ok
  2020-07-14 21:46 [PATCH v2.2 0/4] xfs_repair: check quota counters Darrick J. Wong
From: Darrick J. Wong @ 2020-07-14 21:46 UTC
  To: sandeen, darrick.wong; +Cc: linux-xfs

From: Darrick J. Wong <darrick.wong@oracle.com>

If we verified that the incore quota counts match the ondisk quota
contents, we can leave the CHKD flags set so that the next mount doesn't
have to repeat the quotacheck.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
---
 repair/quotacheck.c |   10 ++++++++++
 repair/quotacheck.h |    1 +
 repair/xfs_repair.c |    2 +-
 3 files changed, 12 insertions(+), 1 deletion(-)


diff --git a/repair/quotacheck.c b/repair/quotacheck.c
index 481a1289..0df1f2be 100644
--- a/repair/quotacheck.c
+++ b/repair/quotacheck.c
@@ -24,6 +24,16 @@ void quotacheck_skip(void)
  */
 static uint16_t chkd_flags;
 
+/*
+ * Return CHKD flags for the quota types that we checked.  If we encountered
+ * any errors at all, return zero.
+ */
+uint16_t
+quotacheck_results(void)
+{
+	return chkd_flags;
+}
+
 /* Global incore dquot tree */
 struct qc_dquots {
 	pthread_mutex_t		lock;
diff --git a/repair/quotacheck.h b/repair/quotacheck.h
index 08e11d17..d745696f 100644
--- a/repair/quotacheck.h
+++ b/repair/quotacheck.h
@@ -9,6 +9,7 @@
 void quotacheck_skip(void);
 void quotacheck_adjust(struct xfs_mount *mp, xfs_ino_t ino);
 void quotacheck_verify(struct xfs_mount *mp, unsigned int type);
+uint16_t quotacheck_results(void);
 int quotacheck_setup(struct xfs_mount *mp);
 void quotacheck_teardown(void);
 
diff --git a/repair/xfs_repair.c b/repair/xfs_repair.c
index a787da4c..d687edea 100644
--- a/repair/xfs_repair.c
+++ b/repair/xfs_repair.c
@@ -1106,7 +1106,7 @@ _("Warning:  project quota information would be cleared.\n"
 
 	dsb = sbp->b_addr;
 
-	if (mp->m_sb.sb_qflags & XFS_ALL_QUOTA_CHKD) {
+	if ((mp->m_sb.sb_qflags & XFS_ALL_QUOTA_CHKD) != quotacheck_results()) {
 		do_warn(_("Note - quota info will be regenerated on next "
 			"quota mount.\n"));
 		dsb->sb_qflags &= cpu_to_be16(~(XFS_UQUOTA_CHKD |


