From: Jeff Layton <jlayton@kernel.org>
To: ceph-devel@vger.kernel.org
Cc: idryomov@gmail.com, Patrick Donnelly <pdonnell@redhat.com>
Subject: [PATCH] ceph: dump info about cap flushes when we're waiting too long for them
Date: Wed,  7 Jul 2021 13:19:42 -0400	[thread overview]
Message-ID: <20210707171942.38428-1-jlayton@kernel.org> (raw)

We've had some cases of hung umounts in teuthology testing. It looks
like the client is waiting for cap flushes to complete, but they never
do.

Change wait_caps_flush to wait for up to 60s at a time, and dump info
about the state of the cap flush list after each timeout.

Reported-by: Patrick Donnelly <pdonnell@redhat.com>
URL: https://tracker.ceph.com/issues/51279
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/ceph/mds_client.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

I'm planning to drop this into the testing kernel to help us track down
the cause. I'm not sure if we'll want to keep it long term so I'll plan
to add a [DO NOT MERGE] tag when I do.

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 7fc9432feece..b0fe5df7ef17 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -2064,6 +2064,23 @@ static int check_caps_flush(struct ceph_mds_client *mdsc,
 	return ret;
 }
 
+static void dump_cap_flushes(struct ceph_mds_client *mdsc, u64 want_tid)
+{
+	int index = 0;
+	struct ceph_cap_flush *cf;
+
+	pr_info("%s: still waiting for cap flushes through %llu:\n",
+		__func__, want_tid);
+	spin_lock(&mdsc->cap_dirty_lock);
+	list_for_each_entry(cf, &mdsc->cap_flush_list, g_list) {
+		if (cf->tid > want_tid)
+			break;
+		pr_info("%d: %llu %s %d\n", index++, cf->tid,
+			ceph_cap_string(cf->caps), cf->wake);
+	}
+	spin_unlock(&mdsc->cap_dirty_lock);
+}
+
 /*
  * flush all dirty inode data to disk.
  */
@@ -2072,10 +2089,19 @@ static int check_caps_flush(struct ceph_mds_client *mdsc,
 static void wait_caps_flush(struct ceph_mds_client *mdsc,
 			    u64 want_flush_tid)
 {
+	long ret;
+
 	dout("check_caps_flush want %llu\n", want_flush_tid);
 
-	wait_event(mdsc->cap_flushing_wq,
-		   check_caps_flush(mdsc, want_flush_tid));
+	do {
+		ret = wait_event_timeout(mdsc->cap_flushing_wq,
+			   check_caps_flush(mdsc, want_flush_tid), 60 * HZ);
+		if (ret == 0)
+			dump_cap_flushes(mdsc, want_flush_tid);
+		else if (ret == 1)
+			pr_info("%s: condition evaluated to true after timeout!\n",
+				  __func__);
+	} while (ret == 0);
 
 	dout("check_caps_flush ok, flushed thru %llu\n", want_flush_tid);
 }
