linux-rdma.vger.kernel.org archive mirror
From: Haakon Bugge <haakon.bugge@oracle.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Leon Romanovsky <leonro@nvidia.com>,
	Christian Benvenuti <benve@cisco.com>,
	Dan Carpenter <dan.carpenter@oracle.com>,
	OFED mailing list <linux-rdma@vger.kernel.org>,
	Nelson Escobar <neescoba@cisco.com>
Subject: Re: [PATCH rdma-rc] RDMA/usnic: Lock VF with mutex instead of spinlock
Date: Mon, 13 Sep 2021 08:17:00 +0000	[thread overview]
Message-ID: <ADF1D118-A29D-4B32-9D25-F3B1768C8924@oracle.com> (raw)
In-Reply-To: <2a0e295786c127e518ebee8bb7cafcb819a625f6.1631520231.git.leonro@nvidia.com>



> On 13 Sep 2021, at 10:04, Leon Romanovsky <leon@kernel.org> wrote:
> 
> From: Leon Romanovsky <leonro@nvidia.com>
> 
> The usnic VF doesn't need to be locked in atomic context to create QPs, so
> it is safe to use a mutex instead of a spinlock. This change fixes the
> following smatch error.

s/GFP_ATOMIC/GFP_KERNEL/ in find_free_vf_and_create_qp_grp() as well?


Thxs, Håkon
> 
> Smatch static checker warning:
> 
>   lib/kobject.c:289 kobject_set_name_vargs()
>    warn: sleeping in atomic context
> 
> Fixes: 514aee660df4 ("RDMA: Globally allocate and release QP memory")
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> drivers/infiniband/hw/usnic/usnic_ib.h       |  2 +-
> drivers/infiniband/hw/usnic/usnic_ib_main.c  |  2 +-
> drivers/infiniband/hw/usnic/usnic_ib_verbs.c | 16 ++++++++--------
> 3 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/infiniband/hw/usnic/usnic_ib.h b/drivers/infiniband/hw/usnic/usnic_ib.h
> index 84dd682d2334..b350081aeb5a 100644
> --- a/drivers/infiniband/hw/usnic/usnic_ib.h
> +++ b/drivers/infiniband/hw/usnic/usnic_ib.h
> @@ -90,7 +90,7 @@ struct usnic_ib_dev {
> 
> struct usnic_ib_vf {
> 	struct usnic_ib_dev		*pf;
> -	spinlock_t			lock;
> +	struct mutex			lock;
> 	struct usnic_vnic		*vnic;
> 	unsigned int			qp_grp_ref_cnt;
> 	struct usnic_ib_pd		*pd;
> diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c
> index 228e9a36dad0..d346dd48e731 100644
> --- a/drivers/infiniband/hw/usnic/usnic_ib_main.c
> +++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c
> @@ -572,7 +572,7 @@ static int usnic_ib_pci_probe(struct pci_dev *pdev,
> 	}
> 
> 	vf->pf = pf;
> -	spin_lock_init(&vf->lock);
> +	mutex_init(&vf->lock);
> 	mutex_lock(&pf->usdev_lock);
> 	list_add_tail(&vf->link, &pf->vf_dev_list);
> 	/*
> diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> index 06a4e9d4545d..756a83bcff58 100644
> --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> @@ -196,7 +196,7 @@ find_free_vf_and_create_qp_grp(struct ib_qp *qp,
> 		for (i = 0; dev_list[i]; i++) {
> 			dev = dev_list[i];
> 			vf = dev_get_drvdata(dev);
> -			spin_lock(&vf->lock);
> +			mutex_lock(&vf->lock);
> 			vnic = vf->vnic;
> 			if (!usnic_vnic_check_room(vnic, res_spec)) {
> 				usnic_dbg("Found used vnic %s from %s\n",
> @@ -208,10 +208,10 @@ find_free_vf_and_create_qp_grp(struct ib_qp *qp,
> 							     vf, pd, res_spec,
> 							     trans_spec);
> 
> -				spin_unlock(&vf->lock);
> +				mutex_unlock(&vf->lock);
> 				goto qp_grp_check;
> 			}
> -			spin_unlock(&vf->lock);
> +			mutex_unlock(&vf->lock);
> 
> 		}
> 		usnic_uiom_free_dev_list(dev_list);
> @@ -220,7 +220,7 @@ find_free_vf_and_create_qp_grp(struct ib_qp *qp,
> 
> 	/* Try to find resources on an unused vf */
> 	list_for_each_entry(vf, &us_ibdev->vf_dev_list, link) {
> -		spin_lock(&vf->lock);
> +		mutex_lock(&vf->lock);
> 		vnic = vf->vnic;
> 		if (vf->qp_grp_ref_cnt == 0 &&
> 		    usnic_vnic_check_room(vnic, res_spec) == 0) {
> @@ -228,10 +228,10 @@ find_free_vf_and_create_qp_grp(struct ib_qp *qp,
> 						     vf, pd, res_spec,
> 						     trans_spec);
> 
> -			spin_unlock(&vf->lock);
> +			mutex_unlock(&vf->lock);
> 			goto qp_grp_check;
> 		}
> -		spin_unlock(&vf->lock);
> +		mutex_unlock(&vf->lock);
> 	}
> 
> 	usnic_info("No free qp grp found on %s\n",
> @@ -253,9 +253,9 @@ static void qp_grp_destroy(struct usnic_ib_qp_grp *qp_grp)
> 
> 	WARN_ON(qp_grp->state != IB_QPS_RESET);
> 
> -	spin_lock(&vf->lock);
> +	mutex_lock(&vf->lock);
> 	usnic_ib_qp_grp_destroy(qp_grp);
> -	spin_unlock(&vf->lock);
> +	mutex_unlock(&vf->lock);
> }
> 
> static int create_qp_validate_user_data(struct usnic_ib_create_qp_cmd cmd)
> -- 
> 2.31.1
> 


Thread overview: 6+ messages
2021-09-13  8:04 [PATCH rdma-rc] RDMA/usnic: Lock VF with mutex instead of spinlock Leon Romanovsky
2021-09-13  8:17 ` Haakon Bugge [this message]
2021-09-13  8:27   ` Leon Romanovsky
2021-09-13 12:50     ` Haakon Bugge
2021-09-23  5:34 ` Leon Romanovsky
2021-09-24 13:59 ` Jason Gunthorpe
