From: kernel test robot <lkp@intel.com>
To: George Cherian <george.cherian@marvell.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: kbuild-all@lists.01.org, kuba@kernel.org, davem@davemloft.net,
	sgoutham@marvell.com, lcherian@marvell.com, gakula@marvell.com,
	masahiroy@kernel.org, george.cherian@marvell.com
Subject: Re: [net-next PATCH 3/3] octeontx2-af: Add devlink health reporters for NIX
Date: Tue, 3 Nov 2020 15:30:14 +0800	[thread overview]
Message-ID: <202011031544.I4qbSnOY-lkp@intel.com> (raw)
In-Reply-To: <20201102050649.2188434-4-george.cherian@marvell.com>

Hi George,

I love your patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/George-Cherian/Add-devlink-and-devlink-health-reporters-to/20201102-130844
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git c43fd36f7fec6c227c5e8a8ddd7d3fe97472182f
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/bdffba84e2716a5f218840ac6a80052587e48c59
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review George-Cherian/Add-devlink-and-devlink-health-reporters-to/20201102-130844
        git checkout bdffba84e2716a5f218840ac6a80052587e48c59
        # save the attached .config to the linux build tree
        make W=1 ARCH=x86_64 
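
For faster iteration, the same warnings can usually be reproduced by building
only the affected directory in an already-configured tree, for example:

        make W=1 ARCH=x86_64 drivers/net/ethernet/marvell/octeontx2/af/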

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:19:5: warning: no previous prototype for 'rvu_report_pair_start' [-Wmissing-prototypes]
      19 | int rvu_report_pair_start(struct devlink_fmsg *fmsg, const char *name)
         |     ^~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:30:5: warning: no previous prototype for 'rvu_report_pair_end' [-Wmissing-prototypes]
      30 | int rvu_report_pair_end(struct devlink_fmsg *fmsg)
         |     ^~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:41:13: warning: no previous prototype for 'rvu_nix_af_rvu_intr_handler' [-Wmissing-prototypes]
      41 | irqreturn_t rvu_nix_af_rvu_intr_handler(int irq, void *rvu_irq)
         |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:65:13: warning: no previous prototype for 'rvu_nix_af_err_intr_handler' [-Wmissing-prototypes]
      65 | irqreturn_t rvu_nix_af_err_intr_handler(int irq, void *rvu_irq)
         |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:107:13: warning: no previous prototype for 'rvu_nix_af_ras_intr_handler' [-Wmissing-prototypes]
     107 | irqreturn_t rvu_nix_af_ras_intr_handler(int irq, void *rvu_irq)
         |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:208:5: warning: no previous prototype for 'rvu_nix_register_interrupts' [-Wmissing-prototypes]
     208 | int rvu_nix_register_interrupts(struct rvu *rvu)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:376:5: warning: no previous prototype for 'rvu_nix_health_reporters_create' [-Wmissing-prototypes]
     376 | int rvu_nix_health_reporters_create(struct rvu_devlink *rvu_dl)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:400:6: warning: no previous prototype for 'rvu_nix_health_reporters_destroy' [-Wmissing-prototypes]
     400 | void rvu_nix_health_reporters_destroy(struct rvu_devlink *rvu_dl)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c:569:5: warning: no previous prototype for 'rvu_npa_register_interrupts' [-Wmissing-prototypes]
     569 | int rvu_npa_register_interrupts(struct rvu *rvu)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~
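
These -Wmissing-prototypes warnings mean the functions above have external
linkage but no declaration in scope at the point of definition. The usual fix
is either to mark them static (if they are only used inside rvu_devlink.c) or
to declare them in a header that rvu_devlink.c includes. A minimal sketch of
the header route, assuming rvu.h is the right place (whether to go static or
via a header is the author's call):

        /* Hypothetical additions to rvu.h, matching the definitions
         * flagged above.
         */
        #include <linux/interrupt.h>	/* irqreturn_t for the IRQ handlers */

        struct rvu;			/* defined elsewhere in the driver */
        struct rvu_devlink;

        irqreturn_t rvu_nix_af_rvu_intr_handler(int irq, void *rvu_irq);
        irqreturn_t rvu_nix_af_err_intr_handler(int irq, void *rvu_irq);
        irqreturn_t rvu_nix_af_ras_intr_handler(int irq, void *rvu_irq);
        int rvu_nix_register_interrupts(struct rvu *rvu);
        int rvu_nix_health_reporters_create(struct rvu_devlink *rvu_dl);
        void rvu_nix_health_reporters_destroy(struct rvu_devlink *rvu_dl);

The remaining warnings (rvu_report_pair_start, rvu_report_pair_end and
rvu_npa_register_interrupts) point at the same issue and would take the same
treatment.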

vim +/rvu_nix_af_rvu_intr_handler +41 drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c

    40	
  > 41	irqreturn_t rvu_nix_af_rvu_intr_handler(int irq, void *rvu_irq)
    42	{
    43		struct rvu_nix_event_cnt *nix_event_count;
    44		struct rvu_devlink *rvu_dl = rvu_irq;
    45		struct rvu *rvu;
    46		int blkaddr;
    47		u64 intr;
    48	
    49		rvu = rvu_dl->rvu;
    50		blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, 0);
    51		if (blkaddr < 0)
    52			return IRQ_NONE;
    53	
    54		nix_event_count = rvu_dl->nix_event_cnt;
    55		intr = rvu_read64(rvu, blkaddr, NIX_AF_RVU_INT);
    56	
    57		if (intr & BIT_ULL(0))
    58			nix_event_count->unmap_slot_count++;
    59	
    60		/* Clear interrupts */
    61		rvu_write64(rvu, blkaddr, NIX_AF_RVU_INT, intr);
    62		return IRQ_HANDLED;
    63	}
    64	
  > 65	irqreturn_t rvu_nix_af_err_intr_handler(int irq, void *rvu_irq)
    66	{
    67		struct rvu_nix_event_cnt *nix_event_count;
    68		struct rvu_devlink *rvu_dl = rvu_irq;
    69		struct rvu *rvu;
    70		int blkaddr;
    71		u64 intr;
    72	
    73		rvu = rvu_dl->rvu;
    74		blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, 0);
    75		if (blkaddr < 0)
    76			return IRQ_NONE;
    77	
    78		nix_event_count = rvu_dl->nix_event_cnt;
    79		intr = rvu_read64(rvu, blkaddr, NIX_AF_ERR_INT);
    80	
    81		if (intr & BIT_ULL(14))
    82			nix_event_count->aq_inst_count++;
    83		if (intr & BIT_ULL(13))
    84			nix_event_count->aq_res_count++;
    85		if (intr & BIT_ULL(12))
    86			nix_event_count->aq_db_count++;
    87		if (intr & BIT_ULL(6))
    88			nix_event_count->rx_on_unmap_pf_count++;
    89		if (intr & BIT_ULL(5))
    90			nix_event_count->rx_mcast_repl_count++;
    91		if (intr & BIT_ULL(4))
    92			nix_event_count->rx_mcast_memfault_count++;
    93		if (intr & BIT_ULL(3))
    94			nix_event_count->rx_mcast_wqe_memfault_count++;
    95		if (intr & BIT_ULL(2))
    96			nix_event_count->rx_mirror_wqe_memfault_count++;
    97		if (intr & BIT_ULL(1))
    98			nix_event_count->rx_mirror_pktw_memfault_count++;
    99		if (intr & BIT_ULL(0))
   100			nix_event_count->rx_mcast_pktw_memfault_count++;
   101	
   102		/* Clear interrupts */
   103		rvu_write64(rvu, blkaddr, NIX_AF_ERR_INT, intr);
   104		return IRQ_HANDLED;
   105	}
   106	
 > 107	irqreturn_t rvu_nix_af_ras_intr_handler(int irq, void *rvu_irq)
   108	{
   109		struct rvu_nix_event_cnt *nix_event_count;
   110		struct rvu_devlink *rvu_dl = rvu_irq;
   111		struct rvu *rvu;
   112		int blkaddr;
   113		u64 intr;
   114	
   115		rvu = rvu_dl->rvu;
   116		blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, 0);
   117		if (blkaddr < 0)
   118			return IRQ_NONE;
   119	
   120		nix_event_count = rvu_dl->nix_event_cnt;
   121		intr = rvu_read64(rvu, blkaddr, NIX_AF_RAS);
   122	
   123		if (intr & BIT_ULL(34))
   124			nix_event_count->poison_aq_inst_count++;
   125		if (intr & BIT_ULL(33))
   126			nix_event_count->poison_aq_res_count++;
   127		if (intr & BIT_ULL(32))
   128			nix_event_count->poison_aq_cxt_count++;
   129		if (intr & BIT_ULL(4))
   130			nix_event_count->rx_mirror_data_poison_count++;
   131		if (intr & BIT_ULL(3))
   132			nix_event_count->rx_mcast_data_poison_count++;
   133		if (intr & BIT_ULL(2))
   134			nix_event_count->rx_mirror_wqe_poison_count++;
   135		if (intr & BIT_ULL(1))
   136			nix_event_count->rx_mcast_wqe_poison_count++;
   137		if (intr & BIT_ULL(0))
   138			nix_event_count->rx_mce_poison_count++;
   139	
   140		/* Clear interrupts */
   141		rvu_write64(rvu, blkaddr, NIX_AF_RAS, intr);
   142		return IRQ_HANDLED;
   143	}
   144	
   145	static bool rvu_nix_af_request_irq(struct rvu *rvu, int offset,
   146					   const char *name, irq_handler_t fn)
   147	{
   148		struct rvu_devlink *rvu_dl = rvu->rvu_dl;
   149		int rc;
   150	
   151		WARN_ON(rvu->irq_allocated[offset]);
   152		rvu->irq_allocated[offset] = false;
   153		sprintf(&rvu->irq_name[offset * NAME_SIZE], name);
   154		rc = request_irq(pci_irq_vector(rvu->pdev, offset), fn, 0,
   155				 &rvu->irq_name[offset * NAME_SIZE], rvu_dl);
   156		if (rc)
   157			dev_warn(rvu->dev, "Failed to register %s irq\n", name);
   158		else
   159			rvu->irq_allocated[offset] = true;
   160	
   161		return rvu->irq_allocated[offset];
   162	}
   163	
   164	static int rvu_nix_blk_register_interrupts(struct rvu *rvu,
   165						   int blkaddr)
   166	{
   167		int base;
   168		bool rc;
   169	
   170		/* Get NIX AF MSIX vectors offset. */
   171		base = rvu_read64(rvu, blkaddr, NIX_PRIV_AF_INT_CFG) & 0x3ff;
   172		if (!base) {
   173			dev_warn(rvu->dev,
   174				 "Failed to get NIX%d NIX_AF_INT vector offsets\n",
   175				 blkaddr - BLKADDR_NIX0);
   176			return 0;
   177		}
   178		/* Register and enable NIX_AF_RVU_INT interrupt */
   179		rc = rvu_nix_af_request_irq(rvu, base +  NIX_AF_INT_VEC_RVU,
   180					    "NIX_AF_RVU_INT",
   181					    rvu_nix_af_rvu_intr_handler);
   182		if (!rc)
   183			goto err;
   184		rvu_write64(rvu, blkaddr, NIX_AF_RVU_INT_ENA_W1S, ~0ULL);
   185	
   186		/* Register and enable NIX_AF_ERR_INT interrupt */
   187		rc = rvu_nix_af_request_irq(rvu, base + NIX_AF_INT_VEC_AF_ERR,
   188					    "NIX_AF_ERR_INT",
   189					    rvu_nix_af_err_intr_handler);
   190		if (!rc)
   191			goto err;
   192		rvu_write64(rvu, blkaddr, NIX_AF_ERR_INT_ENA_W1S, ~0ULL);
   193	
   194		/* Register and enable NIX_AF_RAS interrupt */
   195		rc = rvu_nix_af_request_irq(rvu, base + NIX_AF_INT_VEC_POISON,
   196					    "NIX_AF_RAS",
   197					    rvu_nix_af_ras_intr_handler);
   198		if (!rc)
   199			goto err;
   200		rvu_write64(rvu, blkaddr, NIX_AF_RAS_ENA_W1S, ~0ULL);
   201	
   202		return 0;
   203	err:
   204		rvu_nix_unregister_interrupts(rvu);
   205		return -1;
   206	}
   207	
 > 208	int rvu_nix_register_interrupts(struct rvu *rvu)
   209	{
   210		int blkaddr = 0;
   211	
   212		blkaddr = rvu_get_blkaddr(rvu, blkaddr, 0);
   213		if (blkaddr < 0)
   214			return blkaddr;
   215	
   216		rvu_nix_blk_register_interrupts(rvu, blkaddr);
   217	
   218		return 0;
   219	}
   220	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


Thread overview: 9+ messages
2020-11-02  5:06 [net-next PATCH 0/3] Add devlink and devlink health reporters to George Cherian
2020-11-02  5:06 ` [net-next PATCH 1/3] octeontx2-af: Add devlink suppoort to af driver George Cherian
2020-11-02 13:31   ` Willem de Bruijn
2020-11-02  5:06 ` [net-next PATCH 2/3] octeontx2-af: Add devlink health reporters for NPA George Cherian
2020-11-02 13:42   ` Willem de Bruijn
2020-11-03  7:26   ` kernel test robot
2020-11-02  5:06 ` [net-next PATCH 3/3] octeontx2-af: Add devlink health reporters for NIX George Cherian
2020-11-03  7:30   ` kernel test robot [this message]
2020-11-02 18:00 ` [net-next PATCH 0/3] Add devlink and devlink health reporters to Jakub Kicinski
