* [PATCH] ceph: clean up kick_flushing_inode_caps
@ 2020-02-28 14:15 Jeff Layton
From: Jeff Layton @ 2020-02-28 14:15 UTC (permalink / raw)
To: ceph-devel; +Cc: idryomov, sage
The last thing that this function does is release i_ceph_lock, so
have the caller do that instead. Add a lockdep assertion to
ensure that the function is always called with i_ceph_lock held.
Change the prototype to take a ceph_inode_info pointer and drop
the separate mdsc argument as we can get that from the session.
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
fs/ceph/caps.c | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 9b3d5816c109..c02b63070e0a 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2424,16 +2424,15 @@ void ceph_kick_flushing_caps(struct ceph_mds_client *mdsc,
}
}
-static void kick_flushing_inode_caps(struct ceph_mds_client *mdsc,
- struct ceph_mds_session *session,
- struct inode *inode)
- __releases(ci->i_ceph_lock)
+static void kick_flushing_inode_caps(struct ceph_mds_session *session,
+ struct ceph_inode_info *ci)
{
- struct ceph_inode_info *ci = ceph_inode(inode);
- struct ceph_cap *cap;
+ struct ceph_mds_client *mdsc = session->s_mdsc;
+ struct ceph_cap *cap = ci->i_auth_cap;
+
+ lockdep_assert_held(&ci->i_ceph_lock);
- cap = ci->i_auth_cap;
- dout("kick_flushing_inode_caps %p flushing %s\n", inode,
+ dout("%s %p flushing %s\n", __func__, &ci->vfs_inode,
ceph_cap_string(ci->i_flushing_caps));
if (!list_empty(&ci->i_cap_flush_list)) {
@@ -2445,9 +2444,6 @@ static void kick_flushing_inode_caps(struct ceph_mds_client *mdsc,
spin_unlock(&mdsc->cap_dirty_lock);
__kick_flushing_caps(mdsc, session, ci, oldest_flush_tid);
- spin_unlock(&ci->i_ceph_lock);
- } else {
- spin_unlock(&ci->i_ceph_lock);
}
}
@@ -3326,11 +3322,10 @@ static void handle_cap_grant(struct inode *inode,
if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
if (newcaps & ~extra_info->issued)
wake = true;
- kick_flushing_inode_caps(session->s_mdsc, session, inode);
+ kick_flushing_inode_caps(session, ci);
up_read(&session->s_mdsc->snap_rwsem);
- } else {
- spin_unlock(&ci->i_ceph_lock);
}
+ spin_unlock(&ci->i_ceph_lock);
if (fill_inline)
ceph_fill_inline_data(inode, NULL, extra_info->inline_data,
--
2.24.1
* Re: [PATCH] ceph: clean up kick_flushing_inode_caps
From: Ilya Dryomov @ 2020-02-28 14:27 UTC (permalink / raw)
To: Jeff Layton; +Cc: Ceph Development, Sage Weil
On Fri, Feb 28, 2020 at 3:15 PM Jeff Layton <jlayton@kernel.org> wrote:
>
> The last thing that this function does is release i_ceph_lock, so
> have the caller do that instead. Add a lockdep assertion to
> ensure that the function is always called with i_ceph_lock held.
> Change the prototype to take a ceph_inode_info pointer and drop
> the separate mdsc argument as we can get that from the session.
>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> [...]
> @@ -3326,11 +3322,10 @@ static void handle_cap_grant(struct inode *inode,
> if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
> if (newcaps & ~extra_info->issued)
> wake = true;
> - kick_flushing_inode_caps(session->s_mdsc, session, inode);
> + kick_flushing_inode_caps(session, ci);
> up_read(&session->s_mdsc->snap_rwsem);
> - } else {
> - spin_unlock(&ci->i_ceph_lock);
> }
> + spin_unlock(&ci->i_ceph_lock);
Hi Jeff,
I would keep the else clause here and release i_ceph_lock before
snap_rwsem for proper nesting. Otherwise kudos on getting rid of
another locking quirk!
Thanks,
Ilya
* Re: [PATCH] ceph: clean up kick_flushing_inode_caps
From: Jeff Layton @ 2020-02-28 15:25 UTC (permalink / raw)
To: Ilya Dryomov; +Cc: Ceph Development, Sage Weil
On Fri, 2020-02-28 at 15:27 +0100, Ilya Dryomov wrote:
> On Fri, Feb 28, 2020 at 3:15 PM Jeff Layton <jlayton@kernel.org> wrote:
> > [...]
> > @@ -3326,11 +3322,10 @@ static void handle_cap_grant(struct inode *inode,
> > if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
> > if (newcaps & ~extra_info->issued)
> > wake = true;
> > - kick_flushing_inode_caps(session->s_mdsc, session, inode);
> > + kick_flushing_inode_caps(session, ci);
> > up_read(&session->s_mdsc->snap_rwsem);
> > - } else {
> > - spin_unlock(&ci->i_ceph_lock);
> > }
> > + spin_unlock(&ci->i_ceph_lock);
>
> Hi Jeff,
>
> I would keep the else clause here and release i_ceph_lock before
> snap_rwsem for proper nesting. Otherwise kudos on getting rid of
> another locking quirk!
>
> Thanks,
>
> Ilya
Meh, ok. I don't think the unlock order really matters much here, but
I'll make that change and push to testing.
Thanks,
--
Jeff Layton <jlayton@kernel.org>