linux-rdma.vger.kernel.org archive mirror
From: "Saleem, Shiraz" <shiraz.saleem@intel.com>
To: Jason Gunthorpe <jgg@mellanox.com>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@intel.com>
Cc: "dledford@redhat.com" <dledford@redhat.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"nhorman@redhat.com" <nhorman@redhat.com>,
	"sassmann@redhat.com" <sassmann@redhat.com>,
	"poswald@suse.com" <poswald@suse.com>
Subject: RE: [RDMA RFC v6 00/16] Intel RDMA Driver Updates 2020-05-19
Date: Wed, 27 May 2020 01:58:01 +0000	[thread overview]
Message-ID: <9DD61F30A802C4429A01CA4200E302A7EE04047F@fmsmsx124.amr.corp.intel.com> (raw)
In-Reply-To: <20200521141247.GQ24561@mellanox.com>

> Subject: Re: [RDMA RFC v6 00/16] Intel RDMA Driver Updates 2020-05-19
> 
> On Wed, May 20, 2020 at 12:03:59AM -0700, Jeff Kirsher wrote:
> > This patch set adds a unified Intel Ethernet Protocol Driver for RDMA
> > that supports a new network device E810 (iWARP and RoCEv2 capable) and
> > the existing X722 iWARP device. The driver architecture provides the
> > extensibility for future generations of Intel HW supporting RDMA.
> >
> > This driver replaces the legacy X722 driver i40iw and extends the ABI
> > already defined for i40iw. It is backward compatible with the legacy X722
> > rdma-core provider (libi40iw).
> >
> > This series was built against the rdma for-next branch. This series
> > is dependent upon the v4 100GbE Intel Wired LAN Driver Updates
> > 2020-05-19 12-patch series, which adds the virtual_bus interface and
> > ice/i40e LAN driver changes.
> >
> > v5-->v6:
> > * Convert irdma destroy QP to a synchronous API
> > * Drop HMC obj macros for use counts like IRDMA_INC_SD_REFCNT et al.
> > * Clean up unnecessary 'mem' variable in irdma_create_qp
> > * Clean up unused headers such as linux/moduleparam.h et al.
> > * Set kernel_ver in irdma_ualloc_resp struct to current ABI ver. Placeholder
> >   to support user-space compatibility checks in future
> > * A GENMASK/FIELD_PREP scheme to set WQE descriptor fields was considered
> >   for the irdma driver but dropped. The FIELD_PREP macro cannot be used on
> >   the device bitfield mask array maintained for common WQE descriptors and
> >   initialized based on HW generation; the macro expects compile-time
> >   constants only.
> 
> The request was to use GENMASK for the #define constants. If you move to a
> code environment, then the spot where the constant appears in the C code
> should be FIELD_PREP'd into something dynamic code can use.
>

Maybe I am missing something here, but from what I understood,
the advantage of using GENMASK for the masks
was that we could get rid of the open-coded shift constants and use the
FIELD_PREP macro to place the value in the field of a descriptor.
This should work for the static masks. So something like --

-#define IRDMA_UDA_QPSQ_INLINEDATALEN_S 48
-#define IRDMA_UDA_QPSQ_INLINEDATALEN_M \
-       ((u64)0xff << IRDMA_UDA_QPSQ_INLINEDATALEN_S)
+#define IRDMA_UDA_QPSQ_INLINEDATALEN_M GENMASK_ULL(55, 48)

-#define LS_64(val, field)      (((u64)(val) << field ## _S) & (field ## _M))
+#define LS_64(val, field)      FIELD_PREP(field ## _M, val)
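
To illustrate, a minimal sketch of how the reworked macro would be used
(the helper and the field chosen here are made up for illustration, not
actual driver code):

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define IRDMA_UDA_QPSQ_INLINEDATALEN_M	GENMASK_ULL(55, 48)
#define LS_64(val, field)	FIELD_PREP(field ## _M, val)

/* Hypothetical helper: place len into bits 55:48 of a WQE header quad-word.
 * FIELD_PREP() derives the shift from the constant mask, so the separate
 * _S shift defines can be dropped for the static fields.
 */
static u64 irdma_example_set_inline_len(u64 hdr, u8 len)
{
	return hdr | LS_64(len, IRDMA_UDA_QPSQ_INLINEDATALEN);
}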

However, we also have the device's dynamically computed bitfield mask array
and shifts for some WQE descriptor fields -- see icrdma_hw.c:
https://lore.kernel.org/linux-rdma/20200520070415.3392210-3-jeffrey.t.kirsher@intel.com/#Z30drivers:infiniband:hw:irdma:icrdma_hw.c

We still need the custom macro FLD_LS_64, without FIELD_PREP, in this case,
as FIELD_PREP expects compile-time constants.
+#define FLD_LS_64(dev, val, field) \
+	(((u64)(val) << (dev)->hw_shifts[field ## _S]) & \
+	 (dev)->hw_masks[field ## _M])
And the shifts are still required for these fields, which causes a bit of
inconsistency.
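
To make the constraint concrete, a minimal sketch of the dynamic case (the
struct layout, index names, and init values below are made up for
illustration; see the lore link above for the real tables):

#include <linux/bits.h>
#include <linux/types.h>

/* Hypothetical table indices; one _S/_M pair per descriptor field. */
#define IRDMA_EXAMPLE_CQ_WQEIDX_S	0	/* index into hw_shifts[] */
#define IRDMA_EXAMPLE_CQ_WQEIDX_M	0	/* index into hw_masks[] */

struct irdma_example_dev {
	u64 hw_masks[1];
	u8 hw_shifts[1];
};

#define FLD_LS_64(dev, val, field) \
	(((u64)(val) << (dev)->hw_shifts[field ## _S]) & \
	 (dev)->hw_masks[field ## _M])

/* Hypothetical init for a newer HW generation: the field is wider, so the
 * mask/shift pair is only known at runtime once the device is identified.
 */
static void irdma_example_init_gen2(struct irdma_example_dev *dev)
{
	dev->hw_shifts[IRDMA_EXAMPLE_CQ_WQEIDX_S] = 32;
	dev->hw_masks[IRDMA_EXAMPLE_CQ_WQEIDX_M] = GENMASK_ULL(46, 32);
}

static u64 irdma_example_build_qword(struct irdma_example_dev *dev, u32 wqe_idx)
{
	/* FIELD_PREP(dev->hw_masks[...], wqe_idx) would not compile here:
	 * FIELD_PREP() BUILD_BUG()s when the mask is not a constant.
	 */
	return FLD_LS_64(dev, wqe_idx, IRDMA_EXAMPLE_CQ_WQEIDX);
}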





Thread overview: 33+ messages
2020-05-20  7:03 [RDMA RFC v6 00/16] Intel RDMA Driver Updates 2020-05-19 Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 01/16] RDMA/irdma: Add driver framework definitions Jeff Kirsher
2020-05-20  7:26   ` Greg KH
2020-05-27  1:57     ` Saleem, Shiraz
2020-05-20  7:04 ` [RDMA RFC v6 02/16] RDMA/irdma: Implement device initialization definitions Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 03/16] RDMA/irdma: Implement HW Admin Queue OPs Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 04/16] RDMA/irdma: Add HMC backing store setup functions Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 05/16] RDMA/irdma: Add privileged UDA queue implementation Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 06/16] RDMA/irdma: Add QoS definitions Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 07/16] RDMA/irdma: Add connection manager Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 08/16] RDMA/irdma: Add PBLE resource manager Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 09/16] RDMA/irdma: Implement device supported verb APIs Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 10/16] RDMA/irdma: Add RoCEv2 UD OP support Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 11/16] RDMA/irdma: Add user/kernel shared libraries Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 12/16] RDMA/irdma: Add miscellaneous utility definitions Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 13/16] RDMA/irdma: Add dynamic tracing for CM Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 14/16] RDMA/irdma: Add ABI definitions Jeff Kirsher
2020-05-20  7:54   ` Gal Pressman
2020-05-20  8:52     ` Greg KH
2020-05-20  9:02       ` Gal Pressman
2020-05-20 12:37         ` Jason Gunthorpe
2020-05-27  1:58           ` Saleem, Shiraz
2020-05-20  7:04 ` [RDMA RFC v6 15/16] RDMA/irdma: Add irdma Kconfig/Makefile and remove i40iw Jeff Kirsher
2020-05-20  7:04 ` [RDMA RFC v6 16/16] RDMA/irdma: Update MAINTAINERS file Jeff Kirsher
2020-05-20  7:49   ` Gal Pressman
2020-05-27  1:58     ` Saleem, Shiraz
2020-05-21 14:12 ` [RDMA RFC v6 00/16] Intel RDMA Driver Updates 2020-05-19 Jason Gunthorpe
2020-05-27  1:58   ` Saleem, Shiraz [this message]
2020-05-27  5:08     ` Leon Romanovsky
2020-05-29 15:21       ` Saleem, Shiraz
2020-06-01 14:28         ` Jason Gunthorpe
2020-06-02 22:59           ` Saleem, Shiraz
2020-06-02 23:29             ` Jason Gunthorpe
