From: Parav Pandit <parav@mellanox.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
	Leon Romanovsky <leonro@mellanox.com>,
	RDMA mailing list <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH rdma-next v1 2/4] IB/core: Let IB core distribute cache update events
Date: Wed, 8 Jan 2020 11:35:30 +0000
Message-ID: <ff0e5aa8-d931-0270-9aa1-0a8aacd2a253@mellanox.com>
In-Reply-To: <20200107210230.GA7774@ziepe.ca>

On 1/8/2020 2:32 AM, Jason Gunthorpe wrote:
> On Thu, Dec 12, 2019 at 01:30:22PM +0200, Leon Romanovsky wrote:
> 
>> @@ -2627,7 +2626,11 @@ struct ib_device {
>>  	struct rcu_head rcu_head;
>>
>>  	struct list_head              event_handler_list;
>> -	spinlock_t                    event_handler_lock;
>> +	/* Protects event_handler_list */
>> +	struct rw_semaphore event_handler_rwsem;
>> +
>> +	/* Protects QP's event_handler calls and open_qp list */
>> +	spinlock_t event_handler_lock;
> 
> This only protects the open_qp list really; the event handler call
> doesn't need a spinlock. So let's name it properly. open_list_lock?
> 
Yes, it protects the open_qp list, and the event handler is called for
each list item, so it doesn't really need to protect the event handler
calls.
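For context, the dispatch under this lock looks roughly like the sketch
below, based on __ib_shared_qp_event_handler() in
drivers/infiniband/core/verbs.c (simplified; details may differ from the
tree this series applies to):

static void __ib_shared_qp_event_handler(struct ib_event *event,
					 void *context)
{
	struct ib_qp *qp = context;
	unsigned long flags;

	/*
	 * The spinlock only guards the walk of the open_list of QPs
	 * sharing this real QP; the event_handler callbacks invoked
	 * inside the loop don't themselves need it.
	 */
	spin_lock_irqsave(&qp->device->event_handler_lock, flags);
	list_for_each_entry(event->element.qp, &qp->open_list, open_list)
		if (event->element.qp->event_handler)
			event->element.qp->event_handler(event,
					event->element.qp->qp_context);
	spin_unlock_irqrestore(&qp->device->event_handler_lock, flags);
}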

> It is sort of weird that we globally serialize all the qp event
> handlers? i.e. that this lock isn't in the ib_qp.
> 
It probably isn't in each ib_qp because ib_qp instances can number in
the hundreds of thousands, while XRC QP events are not frequent enough
for the lock to get contended. So a per-device QP list lock seems fine.
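With your suggested rename, the fields would end up roughly like this
(a sketch only; "qp_open_list_lock" combines your "open_list_lock"
suggestion with the qp prefix from patch 4/4, and the final name can be
settled in v2):

struct ib_device {
	...
	struct list_head event_handler_list;
	/* Protects event_handler_list */
	struct rw_semaphore event_handler_rwsem;

	/* Protects open_list of all shared (XRC) QPs on the device */
	spinlock_t qp_open_list_lock;
	...
};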

Thread overview: 9+ messages
2019-12-12 11:30 [PATCH rdma-next v1 0/4] Let IB core distribute cache update events Leon Romanovsky
2019-12-12 11:30 ` [PATCH rdma-next v1 1/4] IB/mlx5: Do reverse sequence during device removal Leon Romanovsky
2019-12-12 11:30 ` [PATCH rdma-next v1 2/4] IB/core: Let IB core distribute cache update events Leon Romanovsky
2020-01-07 21:02   ` Jason Gunthorpe
2020-01-08 11:35     ` Parav Pandit [this message]
2019-12-12 11:30 ` [PATCH rdma-next v1 3/4] IB/core: Cut down single member ib_cache structure Leon Romanovsky
2019-12-12 11:30 ` [PATCH rdma-next v1 4/4] IB/core: Prefix qp to event_handler_lock Leon Romanovsky
2020-01-08  0:28 ` [PATCH rdma-next v1 0/4] Let IB core distribute cache update events Jason Gunthorpe
2020-01-08 11:42   ` Parav Pandit
