* cephfs kclient session->s_mutex coverage
@ 2021-06-09 14:12 Jeff Layton
From: Jeff Layton @ 2021-06-09 14:12 UTC (permalink / raw)
  To: Ilya Dryomov, Sage Weil, Yan, Zheng, Sadeh-Weinraub, Yehuda
  Cc: Ceph Development

Ancient history question for those who worked on the kclient long ago...

I've been seeing some problems during unmounting with the latest kclient
code and have opened a tracker here:

    https://tracker.ceph.com/issues/51158?issue_count=114&issue_position=1&next_issue_id=51112

The issue is that the client can end up queueing an asynchronous iput()
call to put an inode reference, but that can happen after the point
where the workqueues get flushed at unmount time.

We could pile on more workarounds for this, but I keep circling back
to the fact that the session->s_mutex seems to be held over _way_ too
much of the kclient code. Reducing its scope would be a much better
fix: it would probably help performance as well, and it would let us
eliminate the workaround that added ceph_async_iput().

I know this is ancient history to most of you who originally worked on
this code, but does anyone remember what the session->s_mutex was
intended to protect, particularly in ceph_check_caps() but also in some
of the other codepaths where it's taken?

When I've asked in the past, I've gotten some handwavy answers about
ordering of session messages, but I don't think that's a good enough
reason to do this. We shouldn't be relying on sleeping locks to order
messages on the wire. Most of the data that is accessed and altered in
this code seems to be protected by more fine-grained locks.
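
For illustration, a minimal sketch of what I mean, with hypothetical
names rather than the actual ceph structures:

#include <linux/mutex.h>
#include <linux/spinlock.h>

struct example_session {
	struct mutex s_mutex;		/* broad, sleeping */
};

struct example_inode_state {
	spinlock_t lock;		/* fine-grained, per object */
	unsigned int caps_issued;
};

/* The state update itself only needs the per-object spinlock... */
static void example_update_caps(struct example_inode_state *st,
				unsigned int caps)
{
	spin_lock(&st->lock);
	st->caps_issued = caps;
	spin_unlock(&st->lock);
}

/*
 * ...but when every message handler wraps the update in the session
 * mutex, the mutex hold time is what ends up defining the order in
 * which updates are applied; that's ordering by sleeping lock rather
 * than by anything in the protocol itself.
 */
static void example_handle_message(struct example_session *s,
				   struct example_inode_state *st,
				   unsigned int caps)
{
	mutex_lock(&s->s_mutex);
	example_update_caps(st, caps);
	mutex_unlock(&s->s_mutex);
}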

There's also this:

/*
 * Handle mds reply.
 *
 * We take the session mutex and parse and process the reply immediately.
 * This preserves the logical ordering of replies, capabilities, etc., sent
 * by the MDS as they are applied to our local cache.
 */

...but again, using sleeping locks to do this ordering is problematic.

Can anyone help identify why it was done this way in the first place? If
we drop the s_mutex from some of these codepaths, what sort of ordering
do we need to guarantee? The snap_rwsem seems to have similar issues, so
the same questions apply to it as well.

Thanks,
-- 
Jeff Layton <jlayton@redhat.com>

