From mboxrd@z Thu Jan 1 00:00:00 1970
From: Steven Whitehouse
Date: Fri, 12 Jan 2018 10:55:11 +0000
Subject: [Cluster-devel] [GFS2 PATCH 4/6] GFS2: Simplify by checking fast_to_acquire in gfs2_rgrp_congested
In-Reply-To: <20180110204227.20315-5-rpeterso@redhat.com>
References: <20180110204227.20315-1-rpeterso@redhat.com> <20180110204227.20315-5-rpeterso@redhat.com>
Message-ID: 
List-Id: 
To: cluster-devel.redhat.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Hi,

On 10/01/18 20:42, Bob Peterson wrote:
> This patch simply moves function fast_to_acquire to before function
> gfs2_rgrp_congested, and checks it inside the function rather than
> before calling it. As with function gfs2_rgrp_used_recently, function
> fast_to_acquire was only called the first time gfs2_rgrp_congested
> was used, but not the second, and it applies equally to both calls.
>
> Signed-off-by: Bob Peterson

It definitely doesn't make sense to move this. The point here is that
we want to know whether the rgrp glock is likely to be fast to acquire.
Once we have the glock we don't care any more... so why check it again
after the fact? It is just too late by then,

Steve.

> ---
> fs/gfs2/rgrp.c | 68 +++++++++++++++++++++++++++++-----------------------------
> 1 file changed, 34 insertions(+), 34 deletions(-)
>
> diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
> index fa72e3e6ce4e..9d5f35b01c1d 100644
> --- a/fs/gfs2/rgrp.c
> +++ b/fs/gfs2/rgrp.c
> @@ -1819,6 +1819,37 @@ static void try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked, u64 skip
> return;
> }
>
> +/**
> + * fast_to_acquire - determine if a resource group will be fast to acquire
> + *
> + * 1. If this is one of our preferred rgrps, it should be quicker to acquire,
> + * because we tried to set ourselves up as dlm lock master.
> + * 2. If the glock is unlocked, consider it slow because it will take time for
> + * the dlm and the glock state machine to transition it to a locked state.
> + * 3. If there are glock holder records queued to the glock, consider it slow
> + * because this process will need to be queued up behind another process:
> + * Our request can't be enqueued until the other is dequeued.
> + * 4. If the rgrp glock is being demoted, consider it slow because the glock
> + * will need to be demoted, possibly used on another node, the promoted,
> + * all of which will take time.
> + *
> + */
> +static inline bool fast_to_acquire(const struct gfs2_rgrpd *rgd)
> +{
> + struct gfs2_glock *gl = rgd->rd_gl;
> +
> + if (rgd->rd_flags & GFS2_RDF_PREFERRED)
> + return true;
> + if (gl->gl_state == LM_ST_UNLOCKED)
> + return false;
> + if (!list_empty(&gl->gl_holders))
> + return false;
> + if (test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags) ||
> + test_bit(GLF_DEMOTE, &gl->gl_flags))
> + return false;
> + return true;
> +}
> +
> /**
> * gfs2_rgrp_used_recently - check if a rgrp has been used recently
> * @rgd: The rgrp to test
> @@ -1873,6 +1904,9 @@ static bool gfs2_rgrp_congested(const struct gfs2_rgrpd *rgd, int loops)
> u64 var;
> int cpu, nonzero = 0;
>
> + if (loops == 0 && !fast_to_acquire(rgd))
> + return true;
> +
> /* If it hasn't been used recently we can't judge the statistics, so
> assume it's not congested. */
> if (!gfs2_rgrp_used_recently(rgd, HZ))
> @@ -1935,37 +1969,6 @@ static bool gfs2_select_rgrp(struct gfs2_rgrpd **pos, const struct gfs2_rgrpd *b
> return false;
> }
>
> -/**
> - * fast_to_acquire - determine if a resource group will be fast to acquire
> - *
> - * 1. If this is one of our preferred rgrps, it should be quicker to acquire,
> - * because we tried to set ourselves up as dlm lock master.
> - * 2. If the glock is unlocked, consider it slow because it will take time for
> - * the dlm and the glock state machine to transition it to a locked state.
> - * 3. If there are glock holder records queued to the glock, consider it slow
> - * because this process will need to be queued up behind another process:
> - * Our request can't be enqueued until the other is dequeued.
> - * 4. If the rgrp glock is being demoted, consider it slow because the glock
> - * will need to be demoted, possibly used on another node, the promoted,
> - * all of which will take time.
> - *
> - */
> -static inline bool fast_to_acquire(struct gfs2_rgrpd *rgd)
> -{
> - struct gfs2_glock *gl = rgd->rd_gl;
> -
> - if (rgd->rd_flags & GFS2_RDF_PREFERRED)
> - return true;
> - if (gl->gl_state == LM_ST_UNLOCKED)
> - return false;
> - if (!list_empty(&gl->gl_holders))
> - return false;
> - if (test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags) ||
> - test_bit(GLF_DEMOTE, &gl->gl_flags))
> - return false;
> - return true;
> -}
> -
> /**
> * gfs2_inplace_reserve - Reserve space in the filesystem
> * @ip: the inode to reserve space for
> @@ -2017,9 +2020,6 @@ int gfs2_inplace_reserve(struct gfs2_inode *ip, struct gfs2_alloc_parms *ap)
> if (skip && skip--)
> goto next_rgrp;
> if (!gfs2_rs_active(rs)) {
> - if (loops == 0 &&
> - !fast_to_acquire(rs->rs_rbm.rgd))
> - goto next_rgrp;
> if ((loops < 2) &&
> gfs2_rgrp_congested(rs->rs_rbm.rgd, loops))
> goto next_rgrp;
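
For reference, the ordering described above, as a rough sketch of the
pre-patch selection loop in gfs2_inplace_reserve(). This is simplified
and abridged rather than the exact upstream code: the rgrp walk, the
reservation handling and the error paths are left out, and the
three-pass for loop only approximates the real loop structure in
fs/gfs2/rgrp.c.

	/* Rough sketch only - not the exact upstream code. */
	for (loops = 0; loops < 3; loops++) {
		/* A candidate rgrp has been chosen; its glock is NOT yet held. */
		if (!gfs2_rs_active(rs)) {
			/*
			 * Skipping a "slow" rgrp is only useful here, before
			 * the DLM round trip has been paid for, and only on
			 * the first pass, while we can still afford to be picky.
			 */
			if (loops == 0 &&
			    !fast_to_acquire(rs->rs_rbm.rgd))
				goto next_rgrp;
			if ((loops < 2) &&
			    gfs2_rgrp_congested(rs->rs_rbm.rgd, loops))
				goto next_rgrp;
		}

		/*
		 * The rgrp glock is acquired just after this point (via
		 * gfs2_glock_nq_init()), so from here on the question
		 * "will this glock be fast to acquire?" can no longer
		 * change anything - the cost has already been paid.
		 */

next_rgrp:
		/* advance to the next candidate rgrp */
		;
	}

That is why the pre-patch code makes the fast_to_acquire() test at the
call site, before the glock is requested, rather than inside
gfs2_rgrp_congested().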