Linux-RDMA Archive on lore.kernel.org
From: Parav Pandit <parav@mellanox.com>
To: Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
	Leon Romanovsky <leonro@mellanox.com>,
	RDMA mailing list <linux-rdma@vger.kernel.org>
Subject: Re: [PATCH rdma-next v1 2/4] IB/core: Let IB core distribute cache update events
Date: Wed, 8 Jan 2020 11:35:30 +0000
Message-ID: <ff0e5aa8-d931-0270-9aa1-0a8aacd2a253@mellanox.com>
In-Reply-To: <20200107210230.GA7774@ziepe.ca>

On 1/8/2020 2:32 AM, Jason Gunthorpe wrote:
> On Thu, Dec 12, 2019 at 01:30:22PM +0200, Leon Romanovsky wrote:
> 
>> @@ -2627,7 +2626,11 @@ struct ib_device {
>>  	struct rcu_head rcu_head;
>>
>>  	struct list_head              event_handler_list;
>> -	spinlock_t                    event_handler_lock;
>> +	/* Protects event_handler_list */
>> +	struct rw_semaphore event_handler_rwsem;
>> +
>> +	/* Protects QP's event_handler calls and open_qp list */
>> +	spinlock_t event_handler_lock;
> 
> This only protects the open_qp list really, the event handler call
> doesn't need a spinlock. So lets name it properly. open_list_lock ?
> 
Yes, it protects the open_qp list, and the event handler is simply called
for each list item, so it doesn't really need to protect the event handler
calls themselves. A rough sketch of that pattern is below.
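
For reference, this is roughly what the distribution of a shared (XRC) QP
event looks like with the per-device lock; a simplified sketch, not the
exact upstream code, and the field names may not match the patch exactly:

static void shared_qp_event_handler(struct ib_event *event, void *context)
{
	struct ib_qp *qp = context;	/* the real (owner) QP */
	unsigned long flags;

	/* The lock only has to keep the open_qp list stable while we
	 * walk it; each opened QP's handler is just invoked per entry.
	 */
	spin_lock_irqsave(&qp->device->event_handler_lock, flags);
	list_for_each_entry(event->element.qp, &qp->open_list, open_list)
		if (event->element.qp->event_handler)
			event->element.qp->event_handler(event,
					event->element.qp->qp_context);
	spin_unlock_irqrestore(&qp->device->event_handler_lock, flags);
}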

> It is sort of weird that we globally serialize all the qp event
> handlers? ie that this lock isn't in the ib_qp.
> 
It probably isn't in each ib_qp because there can be hundreds of thousands
of ib_qp objects, while XRC QP events are not frequent enough for the lock
to get contended. So I think a per-device qp list lock seems fine.
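
For contrast, a minimal sketch of how the new event_handler_rwsem from the
hunk above would typically be used for the device-wide handler list; the
function bodies here are illustrative only, not necessarily what the patch
ends up doing:

void ib_register_event_handler(struct ib_event_handler *event_handler)
{
	/* Writers (register/unregister) are rare and may sleep. */
	down_write(&event_handler->device->event_handler_rwsem);
	list_add_tail(&event_handler->list,
		      &event_handler->device->event_handler_list);
	up_write(&event_handler->device->event_handler_rwsem);
}

void ib_dispatch_event(struct ib_event *event)
{
	struct ib_event_handler *handler;

	/* Readers can sleep and can run concurrently with each other,
	 * which a spinlock would not allow.
	 */
	down_read(&event->device->event_handler_rwsem);
	list_for_each_entry(handler, &event->device->event_handler_list, list)
		handler->handler(handler, event);
	up_read(&event->device->event_handler_rwsem);
}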


Thread overview: 9+ messages
2019-12-12 11:30 [PATCH rdma-next v1 0/4] " Leon Romanovsky
2019-12-12 11:30 ` [PATCH rdma-next v1 1/4] IB/mlx5: Do reverse sequence during device removal Leon Romanovsky
2019-12-12 11:30 ` [PATCH rdma-next v1 2/4] IB/core: Let IB core distribute cache update events Leon Romanovsky
2020-01-07 21:02   ` Jason Gunthorpe
2020-01-08 11:35     ` Parav Pandit [this message]
2019-12-12 11:30 ` [PATCH rdma-next v1 3/4] IB/core: Cut down single member ib_cache structure Leon Romanovsky
2019-12-12 11:30 ` [PATCH rdma-next v1 4/4] IB/core: Prefix qp to event_handler_lock Leon Romanovsky
2020-01-08  0:28 ` [PATCH rdma-next v1 0/4] Let IB core distribute cache update events Jason Gunthorpe
2020-01-08 11:42   ` Parav Pandit
