From: Tom Zanussi <tom.zanussi@linux.intel.com>
To: Dave Jiang <dave.jiang@intel.com>,
	herbert@gondor.apana.org.au, davem@davemloft.net,
	fenghua.yu@intel.com, vkoul@kernel.org
Cc: tony.luck@intel.com, wajdi.k.feghali@intel.com,
	james.guilford@intel.com, kanchana.p.sridhar@intel.com,
	giovanni.cabiddu@intel.com, linux-kernel@vger.kernel.org,
	linux-crypto@vger.kernel.org, dmaengine@vger.kernel.org
Subject: Re: [PATCH v2 04/15] dmaengine: idxd: Export descriptor management functions
Date: Tue, 28 Mar 2023 11:12:16 -0500	[thread overview]
Message-ID: <7c2665ab7e8560f705eb9b6b21c5f6eeebd85eb8.camel@linux.intel.com> (raw)
In-Reply-To: <79d0618f-950c-f2f0-7286-41e199ba0edb@intel.com>

Hi Dave,

On Tue, 2023-03-28 at 09:04 -0700, Dave Jiang wrote:
> 
> 
> On 3/28/23 8:35 AM, Tom Zanussi wrote:
> > To allow idxd sub-drivers to access the descriptor management
> > functions, export them.
> > 
> > Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
> > ---
> >   drivers/dma/idxd/submit.c | 3 +++
> >   1 file changed, 3 insertions(+)
> > 
> > diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
> > index c01db23e3333..9d9ec0b76ccd 100644
> > --- a/drivers/dma/idxd/submit.c
> > +++ b/drivers/dma/idxd/submit.c
> > @@ -61,6 +61,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
> >   
> >         return __get_desc(wq, idx, cpu);
> >   }
> > +EXPORT_SYMBOL_NS_GPL(idxd_alloc_desc, IDXD);
> >   
> >   void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
> >   {
> > @@ -69,6 +70,7 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
> >         desc->cpu = -1;
> >         sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
> >   }
> > +EXPORT_SYMBOL_NS_GPL(idxd_free_desc, IDXD);
> >   
> > static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
> >                                          struct idxd_desc *desc)
> > @@ -215,3 +217,4 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
> >         percpu_ref_put(&wq->wq_active);
> >         return 0;
> >   }
> > +EXPORT_SYMBOL_GPL(idxd_submit_desc);
> 
> This one should use the EXPORT_SYMBOL_NS_GPL() as above?

Yeah, not sure how I missed that one ;-/

Thanks for pointing it out.
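
Will fix in the next version so all three use the namespaced export,
i.e.:

  EXPORT_SYMBOL_NS_GPL(idxd_submit_desc, IDXD);

(And just to sketch the other side: any idxd sub-driver module that
calls these then needs a matching MODULE_IMPORT_NS(IDXD) to link
against symbols exported in the IDXD namespace.)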

Tom



Thread overview: 25+ messages
2023-03-28 15:35 [PATCH v2 00/15] crypto: Add Intel Analytics Accelerator (IAA) crypto compression driver Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 01/15] dmaengine: idxd: add wq driver name support for accel-config user tool Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 02/15] dmaengine: idxd: add external module driver support for dsa_bus_type Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 03/15] dmaengine: idxd: Export drv_enable/disable and related functions Tom Zanussi
2023-03-28 16:02   ` Dave Jiang
2023-03-28 15:35 ` [PATCH v2 04/15] dmaengine: idxd: Export descriptor management functions Tom Zanussi
2023-03-28 16:04   ` Dave Jiang
2023-03-28 16:12     ` Tom Zanussi [this message]
2023-03-28 15:35 ` [PATCH v2 05/15] dmaengine: idxd: Export wq resource " Tom Zanussi
2023-03-28 16:04   ` Dave Jiang
2023-03-28 15:35 ` [PATCH v2 06/15] dmaengine: idxd: Add private_data to struct idxd_wq Tom Zanussi
2023-03-28 16:06   ` Dave Jiang
2023-03-28 16:13     ` Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 07/15] dmaengine: idxd: add callback support for iaa crypto Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 08/15] crypto: iaa - Add IAA Compression Accelerator Documentation Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 09/15] crypto: iaa - Add Intel IAA Compression Accelerator crypto driver core Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 10/15] crypto: iaa - Add per-cpu workqueue table with rebalancing Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 11/15] crypto: iaa - Add compression mode management along with fixed mode Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 12/15] crypto: iaa - Add support for iaa_crypto deflate compression algorithm Tom Zanussi
2023-04-06  8:00   ` Herbert Xu
2023-04-06 14:43     ` Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 13/15] crypto: iaa - Add support for default IAA 'canned' compression mode Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 14/15] crypto: iaa - Add irq support for the crypto async interface Tom Zanussi
2023-03-28 15:35 ` [PATCH v2 15/15] crypto: iaa - Add IAA Compression Accelerator stats Tom Zanussi
     [not found] ` <20230329075149.2736-1-hdanton@sina.com>
2023-03-29 14:58   ` [PATCH v2 10/15] crypto: iaa - Add per-cpu workqueue table with rebalancing Tom Zanussi
