linux-kernel.vger.kernel.org archive mirror
From: Imran Khan <imran.f.khan@oracle.com>
To: Tejun Heo <tj@kernel.org>
Cc: Greg KH <gregkh@linuxfoundation.org>, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v2 1/2] kernfs: use kernfs_node specific mutex and spinlock.
Date: Tue, 11 Jan 2022 10:42:31 +1100	[thread overview]
Message-ID: <989749c4-bae9-8055-39b4-ffc1cb6fc20b@oracle.com> (raw)
In-Reply-To: <YdivuA12i3VU8zO/@slm.duckdns.org>

Hi Tejun,

On 8/1/22 8:25 am, Tejun Heo wrote:
> Hello,
> 
> On Fri, Jan 07, 2022 at 11:01:55PM +1100, Imran Khan wrote:
>> Could you please suggest me some current users of hashed locks ? I can
>> check that code and modify my patches accordingly.
> 
> include/linux/blockgroup_lock.h seems to be one.
> 

Thanks for this.
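For other readers of the thread: the pattern in include/linux/blockgroup_lock.h is a small fixed pool of locks indexed by a hash of the object, so unrelated objects rarely share a lock and no single global lock becomes a hot spot. A minimal userspace sketch of that idea (pthread mutexes stand in for kernel spinlocks; the names, pool size, and shift below are illustrative, not taken from kernfs or blockgroup_lock.h):

```c
#include <pthread.h>
#include <stdint.h>

/* Hashed-lock table: a small, fixed pool of locks shared by many
 * objects, each object mapped to a slot by hashing its address.
 * This mirrors the idea of blockgroup_lock.h, in userspace. */
#define NR_LOCKS 128 /* illustrative; a power of two so masking works */

static pthread_mutex_t lock_table[NR_LOCKS] = {
	[0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER
};

/* Map an object pointer to one lock in the pool. The right shift
 * discards low bits that are identical for all objects because of
 * allocator alignment, spreading objects across slots. */
static pthread_mutex_t *obj_lock(const void *obj)
{
	uintptr_t h = (uintptr_t)obj >> 6;
	return &lock_table[h & (NR_LOCKS - 1)];
}
```

Two objects then contend only if they happen to hash to the same slot, trading a bounded, fixed memory overhead for much lower contention than one global lock, without the per-object memory cost of embedding a lock in every node.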

>> As of now I have not found any standard benchmarks/workloads to show the
>> impact of this contention. We have some in house DB applications where
>> the impact can be easily seen.  Of course those applications can be
>> modified to get the needed data from somewhere else or access sysfs less
>> frequently but nonetheless I am trying to make the current locking
>> scheme more scalable.
> 
> I don't think it needs to show up in one of the common benchmarks but what
> the application does should make some sense. Which files are involved in the
> contentions?
> 

The database application has a health-monitoring component which
regularly collects stats from sysfs. With a small number of databases
this was not an issue, but recently several customers consolidated
their deployments and ended up running hundreds of databases on the
same server, and in those setups the contention became evident. As
more customers consolidate, we are seeing more occurrences of this
issue, and its severity scales with the number of databases running
on the server.

I will have to reach out to the application team for a full list of the
sysfs files being accessed, but one of them is
"/sys/class/infiniband/<device>/ports/<port number>/gids/<gid index>".

Thanks
-- Imran



Thread overview: 16+ messages
2022-01-03  8:45 [RFC PATCH v2 0/2] kernfs: use kernfs_node specific mutex and spinlock Imran Khan
2022-01-03  8:45 ` [RFC PATCH v2 1/2] " Imran Khan
2022-01-03  9:54   ` Greg KH
2022-01-03 22:16     ` Imran Khan
2022-01-04  5:48       ` Imran Khan
2022-01-04  7:40       ` Greg KH
2022-01-06 20:30         ` Tejun Heo
2022-01-07 12:01           ` Imran Khan
2022-01-07 13:30             ` Greg KH
2022-01-07 21:25             ` Tejun Heo
2022-01-10 23:42               ` Imran Khan [this message]
2022-01-12 20:08                 ` Tejun Heo
2022-01-13  8:48                   ` Greg KH
2022-01-13 10:51                     ` Imran Khan
2022-01-03  8:45 ` [RFC PATCH v2 2/2] kernfs: Reduce contention around global per-fs kernfs_rwsem Imran Khan
2022-01-05  2:17   ` [kernfs] 3dd2a5f81a: INFO:trying_to_register_non-static_key kernel test robot
