* [linux-stable-rc:linux-5.4.y 4127/5583] drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc'
@ 2020-12-02 9:23 kernel test robot
2020-12-02 9:53 ` Pablo Neira Ayuso
0 siblings, 1 reply; 3+ messages in thread
From: kernel test robot @ 2020-12-02 9:23 UTC (permalink / raw)
To: kbuild-all
[-- Attachment #1: Type: text/plain, Size: 33130 bytes --]
tree: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y
head: 12a5ce113626ce8208aef76d4d2e9fc93ea48ddf
commit: f8c60f7a00516820589c4c9da5614e4b7f4d0b2f [4127/5583] net: Fix CONFIG_NET_CLS_ACT=n and CONFIG_NFT_FWD_NETDEV={y, m} build
config: x86_64-randconfig-a016-20201202 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 2671fccf0381769276ca8246ec0499adcb9b0355)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install x86_64 cross compiling tool for clang build
# apt-get install binutils-x86-64-linux-gnu
# https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/commit/?id=f8c60f7a00516820589c4c9da5614e4b7f4d0b2f
git remote add linux-stable-rc https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git fetch --no-tags linux-stable-rc linux-5.4.y
git checkout f8c60f7a00516820589c4c9da5614e4b7f4d0b2f
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
>> drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc' [-Werror,-Wimplicit-function-declaration]
skb_reset_tc(skb);
^
drivers/staging/octeon/ethernet-tx.c:358:2: note: did you mean 'skb_reserve'?
include/linux/skbuff.h:2340:20: note: 'skb_reserve' declared here
static inline void skb_reserve(struct sk_buff *skb, int len)
^
1 error generated.
vim +/skb_reset_tc +358 drivers/staging/octeon/ethernet-tx.c
80ff0fd3ab64514 David Daney 2009-05-05 146
80ff0fd3ab64514 David Daney 2009-05-05 147 /*
215c47c931d2e22 Justin P. Mattock 2012-03-26 148 * Prefetch the private data structure. It is larger than the
215c47c931d2e22 Justin P. Mattock 2012-03-26 149 * one cache line.
80ff0fd3ab64514 David Daney 2009-05-05 150 */
80ff0fd3ab64514 David Daney 2009-05-05 151 prefetch(priv);
80ff0fd3ab64514 David Daney 2009-05-05 152
80ff0fd3ab64514 David Daney 2009-05-05 153 /*
80ff0fd3ab64514 David Daney 2009-05-05 154 * The check on CVMX_PKO_QUEUES_PER_PORT_* is designed to
80ff0fd3ab64514 David Daney 2009-05-05 155 * completely remove "qos" in the event neither interface
80ff0fd3ab64514 David Daney 2009-05-05 156 * supports multiple queues per port.
80ff0fd3ab64514 David Daney 2009-05-05 157 */
80ff0fd3ab64514 David Daney 2009-05-05 158 if ((CVMX_PKO_QUEUES_PER_PORT_INTERFACE0 > 1) ||
80ff0fd3ab64514 David Daney 2009-05-05 159 (CVMX_PKO_QUEUES_PER_PORT_INTERFACE1 > 1)) {
80ff0fd3ab64514 David Daney 2009-05-05 160 qos = GET_SKBUFF_QOS(skb);
80ff0fd3ab64514 David Daney 2009-05-05 161 if (qos <= 0)
80ff0fd3ab64514 David Daney 2009-05-05 162 qos = 0;
80ff0fd3ab64514 David Daney 2009-05-05 163 else if (qos >= cvmx_pko_get_num_queues(priv->port))
80ff0fd3ab64514 David Daney 2009-05-05 164 qos = 0;
32680d9319ad7ee Laura Garcia Liebana 2016-02-28 165 } else {
80ff0fd3ab64514 David Daney 2009-05-05 166 qos = 0;
32680d9319ad7ee Laura Garcia Liebana 2016-02-28 167 }
80ff0fd3ab64514 David Daney 2009-05-05 168
80ff0fd3ab64514 David Daney 2009-05-05 169 if (USE_ASYNC_IOBDMA) {
80ff0fd3ab64514 David Daney 2009-05-05 170 /* Save scratch in case userspace is using it */
80ff0fd3ab64514 David Daney 2009-05-05 171 CVMX_SYNCIOBDMA;
80ff0fd3ab64514 David Daney 2009-05-05 172 old_scratch = cvmx_scratch_read64(CVMX_SCR_SCRATCH);
80ff0fd3ab64514 David Daney 2009-05-05 173 old_scratch2 = cvmx_scratch_read64(CVMX_SCR_SCRATCH + 8);
80ff0fd3ab64514 David Daney 2009-05-05 174
80ff0fd3ab64514 David Daney 2009-05-05 175 /*
a620c1632629b42 David Daney 2009-06-23 176 * Fetch and increment the number of packets to be
a620c1632629b42 David Daney 2009-06-23 177 * freed.
80ff0fd3ab64514 David Daney 2009-05-05 178 */
80ff0fd3ab64514 David Daney 2009-05-05 179 cvmx_fau_async_fetch_and_add32(CVMX_SCR_SCRATCH + 8,
80ff0fd3ab64514 David Daney 2009-05-05 180 FAU_NUM_PACKET_BUFFERS_TO_FREE,
80ff0fd3ab64514 David Daney 2009-05-05 181 0);
80ff0fd3ab64514 David Daney 2009-05-05 182 cvmx_fau_async_fetch_and_add32(CVMX_SCR_SCRATCH,
a620c1632629b42 David Daney 2009-06-23 183 priv->fau + qos * 4,
a620c1632629b42 David Daney 2009-06-23 184 MAX_SKB_TO_FREE);
80ff0fd3ab64514 David Daney 2009-05-05 185 }
80ff0fd3ab64514 David Daney 2009-05-05 186
924cc2680fbe181 David Daney 2010-01-07 187 /*
924cc2680fbe181 David Daney 2010-01-07 188 * We have space for 6 segment pointers, If there will be more
924cc2680fbe181 David Daney 2010-01-07 189 * than that, we must linearize.
924cc2680fbe181 David Daney 2010-01-07 190 */
924cc2680fbe181 David Daney 2010-01-07 191 if (unlikely(skb_shinfo(skb)->nr_frags > 5)) {
924cc2680fbe181 David Daney 2010-01-07 192 if (unlikely(__skb_linearize(skb))) {
924cc2680fbe181 David Daney 2010-01-07 193 queue_type = QUEUE_DROP;
924cc2680fbe181 David Daney 2010-01-07 194 if (USE_ASYNC_IOBDMA) {
a012649d6b6ddba Ebru Akagunduz 2013-10-10 195 /*
a012649d6b6ddba Ebru Akagunduz 2013-10-10 196 * Get the number of skbuffs in use
a012649d6b6ddba Ebru Akagunduz 2013-10-10 197 * by the hardware
a012649d6b6ddba Ebru Akagunduz 2013-10-10 198 */
924cc2680fbe181 David Daney 2010-01-07 199 CVMX_SYNCIOBDMA;
a012649d6b6ddba Ebru Akagunduz 2013-10-10 200 skb_to_free =
a012649d6b6ddba Ebru Akagunduz 2013-10-10 201 cvmx_scratch_read64(CVMX_SCR_SCRATCH);
924cc2680fbe181 David Daney 2010-01-07 202 } else {
a012649d6b6ddba Ebru Akagunduz 2013-10-10 203 /*
a012649d6b6ddba Ebru Akagunduz 2013-10-10 204 * Get the number of skbuffs in use
a012649d6b6ddba Ebru Akagunduz 2013-10-10 205 * by the hardware
a012649d6b6ddba Ebru Akagunduz 2013-10-10 206 */
715a7148d774fac Branden Bonaby 2019-03-11 207 skb_to_free =
715a7148d774fac Branden Bonaby 2019-03-11 208 cvmx_fau_fetch_and_add32(priv->fau +
715a7148d774fac Branden Bonaby 2019-03-11 209 qos * 4,
715a7148d774fac Branden Bonaby 2019-03-11 210 MAX_SKB_TO_FREE);
924cc2680fbe181 David Daney 2010-01-07 211 }
a012649d6b6ddba Ebru Akagunduz 2013-10-10 212 skb_to_free = cvm_oct_adjust_skb_to_free(skb_to_free,
ac05a587c8a7b6a Laura Garcia Liebana 2016-03-12 213 priv->fau +
ac05a587c8a7b6a Laura Garcia Liebana 2016-03-12 214 qos * 4);
924cc2680fbe181 David Daney 2010-01-07 215 spin_lock_irqsave(&priv->tx_free_list[qos].lock, flags);
924cc2680fbe181 David Daney 2010-01-07 216 goto skip_xmit;
924cc2680fbe181 David Daney 2010-01-07 217 }
924cc2680fbe181 David Daney 2010-01-07 218 }
924cc2680fbe181 David Daney 2010-01-07 219
80ff0fd3ab64514 David Daney 2009-05-05 220 /*
80ff0fd3ab64514 David Daney 2009-05-05 221 * The CN3XXX series of parts has an errata (GMX-401) which
80ff0fd3ab64514 David Daney 2009-05-05 222 * causes the GMX block to hang if a collision occurs towards
80ff0fd3ab64514 David Daney 2009-05-05 223 * the end of a <68 byte packet. As a workaround for this, we
80ff0fd3ab64514 David Daney 2009-05-05 224 * pad packets to be 68 bytes whenever we are in half duplex
80ff0fd3ab64514 David Daney 2009-05-05 225 * mode. We don't handle the case of having a small packet but
80ff0fd3ab64514 David Daney 2009-05-05 226 * no room to add the padding. The kernel should always give
80ff0fd3ab64514 David Daney 2009-05-05 227 * us at least a cache line
80ff0fd3ab64514 David Daney 2009-05-05 228 */
80ff0fd3ab64514 David Daney 2009-05-05 229 if ((skb->len < 64) && OCTEON_IS_MODEL(OCTEON_CN3XXX)) {
80ff0fd3ab64514 David Daney 2009-05-05 230 union cvmx_gmxx_prtx_cfg gmx_prt_cfg;
80ff0fd3ab64514 David Daney 2009-05-05 231 int interface = INTERFACE(priv->port);
80ff0fd3ab64514 David Daney 2009-05-05 232 int index = INDEX(priv->port);
80ff0fd3ab64514 David Daney 2009-05-05 233
80ff0fd3ab64514 David Daney 2009-05-05 234 if (interface < 2) {
80ff0fd3ab64514 David Daney 2009-05-05 235 /* We only need to pad packet in half duplex mode */
80ff0fd3ab64514 David Daney 2009-05-05 236 gmx_prt_cfg.u64 =
80ff0fd3ab64514 David Daney 2009-05-05 237 cvmx_read_csr(CVMX_GMXX_PRTX_CFG(index, interface));
80ff0fd3ab64514 David Daney 2009-05-05 238 if (gmx_prt_cfg.s.duplex == 0) {
80ff0fd3ab64514 David Daney 2009-05-05 239 int add_bytes = 64 - skb->len;
b9fc9cf29e5d5a5 Roberto Medina 2014-10-08 240
80ff0fd3ab64514 David Daney 2009-05-05 241 if ((skb_tail_pointer(skb) + add_bytes) <=
80ff0fd3ab64514 David Daney 2009-05-05 242 skb_end_pointer(skb))
de77b966ce8adcb yuan linyu 2017-06-18 243 __skb_put_zero(skb, add_bytes);
80ff0fd3ab64514 David Daney 2009-05-05 244 }
80ff0fd3ab64514 David Daney 2009-05-05 245 }
80ff0fd3ab64514 David Daney 2009-05-05 246 }
80ff0fd3ab64514 David Daney 2009-05-05 247
80ff0fd3ab64514 David Daney 2009-05-05 248 /* Build the PKO command */
80ff0fd3ab64514 David Daney 2009-05-05 249 pko_command.u64 = 0;
8a5cc923af4298e Paul Martin 2015-03-30 250 #ifdef __LITTLE_ENDIAN
8a5cc923af4298e Paul Martin 2015-03-30 251 pko_command.s.le = 1;
8a5cc923af4298e Paul Martin 2015-03-30 252 #endif
80ff0fd3ab64514 David Daney 2009-05-05 253 pko_command.s.n2 = 1; /* Don't pollute L2 with the outgoing packet */
80ff0fd3ab64514 David Daney 2009-05-05 254 pko_command.s.segs = 1;
80ff0fd3ab64514 David Daney 2009-05-05 255 pko_command.s.total_bytes = skb->len;
80ff0fd3ab64514 David Daney 2009-05-05 256 pko_command.s.size0 = CVMX_FAU_OP_SIZE_32;
80ff0fd3ab64514 David Daney 2009-05-05 257 pko_command.s.subone0 = 1;
80ff0fd3ab64514 David Daney 2009-05-05 258
80ff0fd3ab64514 David Daney 2009-05-05 259 pko_command.s.dontfree = 1;
924cc2680fbe181 David Daney 2010-01-07 260
924cc2680fbe181 David Daney 2010-01-07 261 /* Build the PKO buffer pointer */
924cc2680fbe181 David Daney 2010-01-07 262 hw_buffer.u64 = 0;
924cc2680fbe181 David Daney 2010-01-07 263 if (skb_shinfo(skb)->nr_frags == 0) {
7d4dea95f8281fc Geert Uytterhoeven 2019-09-19 264 hw_buffer.s.addr = XKPHYS_TO_PHYS((uintptr_t)skb->data);
924cc2680fbe181 David Daney 2010-01-07 265 hw_buffer.s.pool = 0;
924cc2680fbe181 David Daney 2010-01-07 266 hw_buffer.s.size = skb->len;
924cc2680fbe181 David Daney 2010-01-07 267 } else {
7d4dea95f8281fc Geert Uytterhoeven 2019-09-19 268 hw_buffer.s.addr = XKPHYS_TO_PHYS((uintptr_t)skb->data);
924cc2680fbe181 David Daney 2010-01-07 269 hw_buffer.s.pool = 0;
924cc2680fbe181 David Daney 2010-01-07 270 hw_buffer.s.size = skb_headlen(skb);
924cc2680fbe181 David Daney 2010-01-07 271 CVM_OCT_SKB_CB(skb)[0] = hw_buffer.u64;
924cc2680fbe181 David Daney 2010-01-07 272 for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
d7840976e391566 Matthew Wilcox (Oracle 2019-07-22 273) skb_frag_t *fs = skb_shinfo(skb)->frags + i;
b9fc9cf29e5d5a5 Roberto Medina 2014-10-08 274
715a7148d774fac Branden Bonaby 2019-03-11 275 hw_buffer.s.addr =
7d4dea95f8281fc Geert Uytterhoeven 2019-09-19 276 XKPHYS_TO_PHYS((uintptr_t)skb_frag_address(fs));
1fbf400b58fa70c David S. Miller 2019-07-26 277 hw_buffer.s.size = skb_frag_size(fs);
924cc2680fbe181 David Daney 2010-01-07 278 CVM_OCT_SKB_CB(skb)[i + 1] = hw_buffer.u64;
924cc2680fbe181 David Daney 2010-01-07 279 }
7d4dea95f8281fc Geert Uytterhoeven 2019-09-19 280 hw_buffer.s.addr =
7d4dea95f8281fc Geert Uytterhoeven 2019-09-19 281 XKPHYS_TO_PHYS((uintptr_t)CVM_OCT_SKB_CB(skb));
924cc2680fbe181 David Daney 2010-01-07 282 hw_buffer.s.size = skb_shinfo(skb)->nr_frags + 1;
924cc2680fbe181 David Daney 2010-01-07 283 pko_command.s.segs = skb_shinfo(skb)->nr_frags + 1;
924cc2680fbe181 David Daney 2010-01-07 284 pko_command.s.gather = 1;
924cc2680fbe181 David Daney 2010-01-07 285 goto dont_put_skbuff_in_hw;
924cc2680fbe181 David Daney 2010-01-07 286 }
924cc2680fbe181 David Daney 2010-01-07 287
80ff0fd3ab64514 David Daney 2009-05-05 288 /*
80ff0fd3ab64514 David Daney 2009-05-05 289 * See if we can put this skb in the FPA pool. Any strange
80ff0fd3ab64514 David Daney 2009-05-05 290 * behavior from the Linux networking stack will most likely
80ff0fd3ab64514 David Daney 2009-05-05 291 * be caused by a bug in the following code. If some field is
215c47c931d2e22 Justin P. Mattock 2012-03-26 292 * in use by the network stack and gets carried over when a
215c47c931d2e22 Justin P. Mattock 2012-03-26 293 * buffer is reused, bad things may happen. If in doubt and
80ff0fd3ab64514 David Daney 2009-05-05 294 * you dont need the absolute best performance, disable the
80ff0fd3ab64514 David Daney 2009-05-05 295 * define REUSE_SKBUFFS_WITHOUT_FREE. The reuse of buffers has
80ff0fd3ab64514 David Daney 2009-05-05 296 * shown a 25% increase in performance under some loads.
80ff0fd3ab64514 David Daney 2009-05-05 297 */
80ff0fd3ab64514 David Daney 2009-05-05 298 #if REUSE_SKBUFFS_WITHOUT_FREE
166bdaa9aad9903 David Daney 2010-01-27 299 fpa_head = skb->head + 256 - ((unsigned long)skb->head & 0x7f);
80ff0fd3ab64514 David Daney 2009-05-05 300 if (unlikely(skb->data < fpa_head)) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 301 /* TX buffer beginning can't meet FPA alignment constraints */
80ff0fd3ab64514 David Daney 2009-05-05 302 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 303 }
80ff0fd3ab64514 David Daney 2009-05-05 304 if (unlikely
80ff0fd3ab64514 David Daney 2009-05-05 305 ((skb_end_pointer(skb) - fpa_head) < CVMX_FPA_PACKET_POOL_SIZE)) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 306 /* TX buffer isn't large enough for the FPA */
80ff0fd3ab64514 David Daney 2009-05-05 307 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 308 }
80ff0fd3ab64514 David Daney 2009-05-05 309 if (unlikely(skb_shared(skb))) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 310 /* TX buffer sharing data with someone else */
80ff0fd3ab64514 David Daney 2009-05-05 311 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 312 }
80ff0fd3ab64514 David Daney 2009-05-05 313 if (unlikely(skb_cloned(skb))) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 314 /* TX buffer has been cloned */
80ff0fd3ab64514 David Daney 2009-05-05 315 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 316 }
80ff0fd3ab64514 David Daney 2009-05-05 317 if (unlikely(skb_header_cloned(skb))) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 318 /* TX buffer header has been cloned */
80ff0fd3ab64514 David Daney 2009-05-05 319 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 320 }
80ff0fd3ab64514 David Daney 2009-05-05 321 if (unlikely(skb->destructor)) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 322 /* TX buffer has a destructor */
80ff0fd3ab64514 David Daney 2009-05-05 323 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 324 }
80ff0fd3ab64514 David Daney 2009-05-05 325 if (unlikely(skb_shinfo(skb)->nr_frags)) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 326 /* TX buffer has fragments */
80ff0fd3ab64514 David Daney 2009-05-05 327 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 328 }
80ff0fd3ab64514 David Daney 2009-05-05 329 if (unlikely
80ff0fd3ab64514 David Daney 2009-05-05 330 (skb->truesize !=
ec47ea824774046 Alexander Duyck 2012-05-04 331 sizeof(*skb) + skb_end_offset(skb))) {
b4ede7922e82f95 Laura Garcia Liebana 2016-02-28 332 /* TX buffer truesize has been changed */
80ff0fd3ab64514 David Daney 2009-05-05 333 goto dont_put_skbuff_in_hw;
80ff0fd3ab64514 David Daney 2009-05-05 334 }
80ff0fd3ab64514 David Daney 2009-05-05 335
80ff0fd3ab64514 David Daney 2009-05-05 336 /*
80ff0fd3ab64514 David Daney 2009-05-05 337 * We can use this buffer in the FPA. We don't need the FAU
80ff0fd3ab64514 David Daney 2009-05-05 338 * update anymore
80ff0fd3ab64514 David Daney 2009-05-05 339 */
80ff0fd3ab64514 David Daney 2009-05-05 340 pko_command.s.dontfree = 0;
80ff0fd3ab64514 David Daney 2009-05-05 341
a012649d6b6ddba Ebru Akagunduz 2013-10-10 342 hw_buffer.s.back = ((unsigned long)skb->data >> 7) -
a012649d6b6ddba Ebru Akagunduz 2013-10-10 343 ((unsigned long)fpa_head >> 7);
a012649d6b6ddba Ebru Akagunduz 2013-10-10 344
80ff0fd3ab64514 David Daney 2009-05-05 345 *(struct sk_buff **)(fpa_head - sizeof(void *)) = skb;
80ff0fd3ab64514 David Daney 2009-05-05 346
80ff0fd3ab64514 David Daney 2009-05-05 347 /*
80ff0fd3ab64514 David Daney 2009-05-05 348 * The skbuff will be reused without ever being freed. We must
f696a10838ffab8 David Daney 2009-06-23 349 * cleanup a bunch of core things.
80ff0fd3ab64514 David Daney 2009-05-05 350 */
f696a10838ffab8 David Daney 2009-06-23 351 dst_release(skb_dst(skb));
f696a10838ffab8 David Daney 2009-06-23 352 skb_dst_set(skb, NULL);
895b5c9f206eb7d Florian Westphal 2019-09-29 353 skb_ext_reset(skb);
895b5c9f206eb7d Florian Westphal 2019-09-29 354 nf_reset_ct(skb);
80ff0fd3ab64514 David Daney 2009-05-05 355
80ff0fd3ab64514 David Daney 2009-05-05 356 #ifdef CONFIG_NET_SCHED
80ff0fd3ab64514 David Daney 2009-05-05 357 skb->tc_index = 0;
a5135bcfba73450 Willem de Bruijn 2017-01-07 @358 skb_reset_tc(skb);
80ff0fd3ab64514 David Daney 2009-05-05 359 #endif /* CONFIG_NET_SCHED */
6888fc87768eaa2 David Daney 2010-01-07 360 #endif /* REUSE_SKBUFFS_WITHOUT_FREE */
80ff0fd3ab64514 David Daney 2009-05-05 361
80ff0fd3ab64514 David Daney 2009-05-05 362 dont_put_skbuff_in_hw:
80ff0fd3ab64514 David Daney 2009-05-05 363
80ff0fd3ab64514 David Daney 2009-05-05 364 /* Check if we can use the hardware checksumming */
6646baf7041214a Aaro Koskinen 2015-04-04 365 if ((skb->protocol == htons(ETH_P_IP)) &&
861e82d5b5a42d2 Jacob Kiefer 2015-07-10 366 (ip_hdr(skb)->version == 4) &&
861e82d5b5a42d2 Jacob Kiefer 2015-07-10 367 (ip_hdr(skb)->ihl == 5) &&
861e82d5b5a42d2 Jacob Kiefer 2015-07-10 368 ((ip_hdr(skb)->frag_off == 0) ||
861e82d5b5a42d2 Jacob Kiefer 2015-07-10 369 (ip_hdr(skb)->frag_off == htons(1 << 14))) &&
861e82d5b5a42d2 Jacob Kiefer 2015-07-10 370 ((ip_hdr(skb)->protocol == IPPROTO_TCP) ||
861e82d5b5a42d2 Jacob Kiefer 2015-07-10 371 (ip_hdr(skb)->protocol == IPPROTO_UDP))) {
80ff0fd3ab64514 David Daney 2009-05-05 372 /* Use hardware checksum calc */
5a89a875c96a9d0 Hamish Martin 2015-12-22 373 pko_command.s.ipoffp1 = skb_network_offset(skb) + 1;
80ff0fd3ab64514 David Daney 2009-05-05 374 }
80ff0fd3ab64514 David Daney 2009-05-05 375
80ff0fd3ab64514 David Daney 2009-05-05 376 if (USE_ASYNC_IOBDMA) {
80ff0fd3ab64514 David Daney 2009-05-05 377 /* Get the number of skbuffs in use by the hardware */
80ff0fd3ab64514 David Daney 2009-05-05 378 CVMX_SYNCIOBDMA;
a620c1632629b42 David Daney 2009-06-23 379 skb_to_free = cvmx_scratch_read64(CVMX_SCR_SCRATCH);
80ff0fd3ab64514 David Daney 2009-05-05 380 buffers_to_free = cvmx_scratch_read64(CVMX_SCR_SCRATCH + 8);
80ff0fd3ab64514 David Daney 2009-05-05 381 } else {
80ff0fd3ab64514 David Daney 2009-05-05 382 /* Get the number of skbuffs in use by the hardware */
a620c1632629b42 David Daney 2009-06-23 383 skb_to_free = cvmx_fau_fetch_and_add32(priv->fau + qos * 4,
a620c1632629b42 David Daney 2009-06-23 384 MAX_SKB_TO_FREE);
80ff0fd3ab64514 David Daney 2009-05-05 385 buffers_to_free =
80ff0fd3ab64514 David Daney 2009-05-05 386 cvmx_fau_fetch_and_add32(FAU_NUM_PACKET_BUFFERS_TO_FREE, 0);
80ff0fd3ab64514 David Daney 2009-05-05 387 }
80ff0fd3ab64514 David Daney 2009-05-05 388
beb6e57b50dcccf Janani Ravichandran 2016-02-10 389 skb_to_free = cvm_oct_adjust_skb_to_free(skb_to_free,
beb6e57b50dcccf Janani Ravichandran 2016-02-10 390 priv->fau + qos * 4);
a620c1632629b42 David Daney 2009-06-23 391
80ff0fd3ab64514 David Daney 2009-05-05 392 /*
80ff0fd3ab64514 David Daney 2009-05-05 393 * If we're sending faster than the receive can free them then
80ff0fd3ab64514 David Daney 2009-05-05 394 * don't do the HW free.
80ff0fd3ab64514 David Daney 2009-05-05 395 */
4898c560103fb80 David Daney 2010-02-15 396 if ((buffers_to_free < -100) && !pko_command.s.dontfree)
80ff0fd3ab64514 David Daney 2009-05-05 397 pko_command.s.dontfree = 1;
80ff0fd3ab64514 David Daney 2009-05-05 398
4898c560103fb80 David Daney 2010-02-15 399 if (pko_command.s.dontfree) {
6888fc87768eaa2 David Daney 2010-01-07 400 queue_type = QUEUE_CORE;
4898c560103fb80 David Daney 2010-02-15 401 pko_command.s.reg0 = priv->fau + qos * 4;
4898c560103fb80 David Daney 2010-02-15 402 } else {
6888fc87768eaa2 David Daney 2010-01-07 403 queue_type = QUEUE_HW;
4898c560103fb80 David Daney 2010-02-15 404 }
4898c560103fb80 David Daney 2010-02-15 405 if (USE_ASYNC_IOBDMA)
715a7148d774fac Branden Bonaby 2019-03-11 406 cvmx_fau_async_fetch_and_add32(CVMX_SCR_SCRATCH,
715a7148d774fac Branden Bonaby 2019-03-11 407 FAU_TOTAL_TX_TO_CLEAN, 1);
6888fc87768eaa2 David Daney 2010-01-07 408
6888fc87768eaa2 David Daney 2010-01-07 409 spin_lock_irqsave(&priv->tx_free_list[qos].lock, flags);
80ff0fd3ab64514 David Daney 2009-05-05 410
80ff0fd3ab64514 David Daney 2009-05-05 411 /* Drop this packet if we have too many already queued to the HW */
a012649d6b6ddba Ebru Akagunduz 2013-10-10 412 if (unlikely(skb_queue_len(&priv->tx_free_list[qos]) >=
a012649d6b6ddba Ebru Akagunduz 2013-10-10 413 MAX_OUT_QUEUE_DEPTH)) {
6888fc87768eaa2 David Daney 2010-01-07 414 if (dev->tx_queue_len != 0) {
6888fc87768eaa2 David Daney 2010-01-07 415 /* Drop the lock when notifying the core. */
a012649d6b6ddba Ebru Akagunduz 2013-10-10 416 spin_unlock_irqrestore(&priv->tx_free_list[qos].lock,
a012649d6b6ddba Ebru Akagunduz 2013-10-10 417 flags);
6888fc87768eaa2 David Daney 2010-01-07 418 netif_stop_queue(dev);
a012649d6b6ddba Ebru Akagunduz 2013-10-10 419 spin_lock_irqsave(&priv->tx_free_list[qos].lock,
a012649d6b6ddba Ebru Akagunduz 2013-10-10 420 flags);
6888fc87768eaa2 David Daney 2010-01-07 421 } else {
6888fc87768eaa2 David Daney 2010-01-07 422 /* If not using normal queueing. */
6888fc87768eaa2 David Daney 2010-01-07 423 queue_type = QUEUE_DROP;
6888fc87768eaa2 David Daney 2010-01-07 424 goto skip_xmit;
6888fc87768eaa2 David Daney 2010-01-07 425 }
80ff0fd3ab64514 David Daney 2009-05-05 426 }
6888fc87768eaa2 David Daney 2010-01-07 427
6888fc87768eaa2 David Daney 2010-01-07 428 cvmx_pko_send_packet_prepare(priv->port, priv->queue + qos,
6888fc87768eaa2 David Daney 2010-01-07 429 CVMX_PKO_LOCK_NONE);
6888fc87768eaa2 David Daney 2010-01-07 430
80ff0fd3ab64514 David Daney 2009-05-05 431 /* Send the packet to the output queue */
6888fc87768eaa2 David Daney 2010-01-07 432 if (unlikely(cvmx_pko_send_packet_finish(priv->port,
6888fc87768eaa2 David Daney 2010-01-07 433 priv->queue + qos,
6888fc87768eaa2 David Daney 2010-01-07 434 pko_command, hw_buffer,
6888fc87768eaa2 David Daney 2010-01-07 435 CVMX_PKO_LOCK_NONE))) {
a012649d6b6ddba Ebru Akagunduz 2013-10-10 436 printk_ratelimited("%s: Failed to send the packet\n",
a012649d6b6ddba Ebru Akagunduz 2013-10-10 437 dev->name);
6888fc87768eaa2 David Daney 2010-01-07 438 queue_type = QUEUE_DROP;
80ff0fd3ab64514 David Daney 2009-05-05 439 }
6888fc87768eaa2 David Daney 2010-01-07 440 skip_xmit:
6888fc87768eaa2 David Daney 2010-01-07 441 to_free_list = NULL;
80ff0fd3ab64514 David Daney 2009-05-05 442
6888fc87768eaa2 David Daney 2010-01-07 443 switch (queue_type) {
6888fc87768eaa2 David Daney 2010-01-07 444 case QUEUE_DROP:
6888fc87768eaa2 David Daney 2010-01-07 445 skb->next = to_free_list;
6888fc87768eaa2 David Daney 2010-01-07 446 to_free_list = skb;
66812da3a689e3f Tobias Klauser 2017-02-15 447 dev->stats.tx_dropped++;
6888fc87768eaa2 David Daney 2010-01-07 448 break;
6888fc87768eaa2 David Daney 2010-01-07 449 case QUEUE_HW:
6888fc87768eaa2 David Daney 2010-01-07 450 cvmx_fau_atomic_add32(FAU_NUM_PACKET_BUFFERS_TO_FREE, -1);
6888fc87768eaa2 David Daney 2010-01-07 451 break;
6888fc87768eaa2 David Daney 2010-01-07 452 case QUEUE_CORE:
6888fc87768eaa2 David Daney 2010-01-07 453 __skb_queue_tail(&priv->tx_free_list[qos], skb);
6888fc87768eaa2 David Daney 2010-01-07 454 break;
6888fc87768eaa2 David Daney 2010-01-07 455 default:
6888fc87768eaa2 David Daney 2010-01-07 456 BUG();
80ff0fd3ab64514 David Daney 2009-05-05 457 }
80ff0fd3ab64514 David Daney 2009-05-05 458
6888fc87768eaa2 David Daney 2010-01-07 459 while (skb_to_free > 0) {
6888fc87768eaa2 David Daney 2010-01-07 460 struct sk_buff *t = __skb_dequeue(&priv->tx_free_list[qos]);
b9fc9cf29e5d5a5 Roberto Medina 2014-10-08 461
6888fc87768eaa2 David Daney 2010-01-07 462 t->next = to_free_list;
6888fc87768eaa2 David Daney 2010-01-07 463 to_free_list = t;
6888fc87768eaa2 David Daney 2010-01-07 464 skb_to_free--;
80ff0fd3ab64514 David Daney 2009-05-05 465 }
6888fc87768eaa2 David Daney 2010-01-07 466
6888fc87768eaa2 David Daney 2010-01-07 467 spin_unlock_irqrestore(&priv->tx_free_list[qos].lock, flags);
6888fc87768eaa2 David Daney 2010-01-07 468
6888fc87768eaa2 David Daney 2010-01-07 469 /* Do the actual freeing outside of the lock. */
6888fc87768eaa2 David Daney 2010-01-07 470 while (to_free_list) {
6888fc87768eaa2 David Daney 2010-01-07 471 struct sk_buff *t = to_free_list;
b9fc9cf29e5d5a5 Roberto Medina 2014-10-08 472
6888fc87768eaa2 David Daney 2010-01-07 473 to_free_list = to_free_list->next;
6888fc87768eaa2 David Daney 2010-01-07 474 dev_kfree_skb_any(t);
80ff0fd3ab64514 David Daney 2009-05-05 475 }
80ff0fd3ab64514 David Daney 2009-05-05 476
6888fc87768eaa2 David Daney 2010-01-07 477 if (USE_ASYNC_IOBDMA) {
4898c560103fb80 David Daney 2010-02-15 478 CVMX_SYNCIOBDMA;
4898c560103fb80 David Daney 2010-02-15 479 total_to_clean = cvmx_scratch_read64(CVMX_SCR_SCRATCH);
6888fc87768eaa2 David Daney 2010-01-07 480 /* Restore the scratch area */
6888fc87768eaa2 David Daney 2010-01-07 481 cvmx_scratch_write64(CVMX_SCR_SCRATCH, old_scratch);
6888fc87768eaa2 David Daney 2010-01-07 482 cvmx_scratch_write64(CVMX_SCR_SCRATCH + 8, old_scratch2);
4898c560103fb80 David Daney 2010-02-15 483 } else {
715a7148d774fac Branden Bonaby 2019-03-11 484 total_to_clean =
715a7148d774fac Branden Bonaby 2019-03-11 485 cvmx_fau_fetch_and_add32(FAU_TOTAL_TX_TO_CLEAN, 1);
4898c560103fb80 David Daney 2010-02-15 486 }
4898c560103fb80 David Daney 2010-02-15 487
4898c560103fb80 David Daney 2010-02-15 488 if (total_to_clean & 0x3ff) {
4898c560103fb80 David Daney 2010-02-15 489 /*
4898c560103fb80 David Daney 2010-02-15 490 * Schedule the cleanup tasklet every 1024 packets for
4898c560103fb80 David Daney 2010-02-15 491 * the pathological case of high traffic on one port
4898c560103fb80 David Daney 2010-02-15 492 * delaying clean up of packets on a different port
4898c560103fb80 David Daney 2010-02-15 493 * that is blocked waiting for the cleanup.
4898c560103fb80 David Daney 2010-02-15 494 */
4898c560103fb80 David Daney 2010-02-15 495 tasklet_schedule(&cvm_oct_tx_cleanup_tasklet);
80ff0fd3ab64514 David Daney 2009-05-05 496 }
80ff0fd3ab64514 David Daney 2009-05-05 497
4898c560103fb80 David Daney 2010-02-15 498 cvm_oct_kick_tx_poll_watchdog();
4898c560103fb80 David Daney 2010-02-15 499
6888fc87768eaa2 David Daney 2010-01-07 500 return NETDEV_TX_OK;
80ff0fd3ab64514 David Daney 2009-05-05 501 }
80ff0fd3ab64514 David Daney 2009-05-05 502
:::::: The code at line 358 was first introduced by commit
:::::: a5135bcfba7345031df45e02cd150a45add47cf8 net-tc: convert tc_verd to integer bitfields
:::::: TO: Willem de Bruijn <willemb@google.com>
:::::: CC: David S. Miller <davem@davemloft.net>
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 42722 bytes --]
* Re: [linux-stable-rc:linux-5.4.y 4127/5583] drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc'
2020-12-02 9:23 [linux-stable-rc:linux-5.4.y 4127/5583] drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc' kernel test robot
@ 2020-12-02 9:53 ` Pablo Neira Ayuso
2020-12-02 10:05 ` Greg Kroah-Hartman
0 siblings, 1 reply; 3+ messages in thread
From: Pablo Neira Ayuso @ 2020-12-02 9:53 UTC (permalink / raw)
To: kbuild-all
[-- Attachment #1: Type: text/plain, Size: 1975 bytes --]
On Wed, Dec 02, 2020 at 05:23:16PM +0800, kernel test robot wrote:
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y
> head: 12a5ce113626ce8208aef76d4d2e9fc93ea48ddf
> commit: f8c60f7a00516820589c4c9da5614e4b7f4d0b2f [4127/5583] net: Fix CONFIG_NET_CLS_ACT=n and CONFIG_NFT_FWD_NETDEV={y, m} build
> config: x86_64-randconfig-a016-20201202 (attached as .config)
> compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 2671fccf0381769276ca8246ec0499adcb9b0355)
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # install x86_64 cross compiling tool for clang build
> # apt-get install binutils-x86-64-linux-gnu
> # https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/commit/?id=f8c60f7a00516820589c4c9da5614e4b7f4d0b2f
> git remote add linux-stable-rc https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
> git fetch --no-tags linux-stable-rc linux-5.4.y
> git checkout f8c60f7a00516820589c4c9da5614e4b7f4d0b2f
> # save the attached .config to linux build tree
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
> >> drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc' [-Werror,-Wimplicit-function-declaration]
> skb_reset_tc(skb);
> ^
commit 673b41e04a035d760bc0aff83fa9ee24fd9c2779
Author: Randy Dunlap <rdunlap@infradead.org>
Date: Sun Mar 29 09:12:31 2020 -0700
staging/octeon: fix up merge error
Greg, you might want to cherry-pick this commit to 5.4.y to fix this issue.
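Pablo's suggestion maps to a short stable-queue workflow. The remote and branch names below are illustrative assumptions; only the commit id comes from the message above:

```shell
# Apply Randy Dunlap's fix on top of the 5.4 stable-rc branch.
# Remote/branch names are illustrative; the commit id is from this thread.
git fetch linux-stable-rc linux-5.4.y
git checkout -b octeon-fix linux-stable-rc/linux-5.4.y
git cherry-pick 673b41e04a035d760bc0aff83fa9ee24fd9c2779
```

If the cherry-pick conflicts against 5.4.y, `git cherry-pick --abort` restores the branch and the fix can be backported by hand instead.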
* Re: [linux-stable-rc:linux-5.4.y 4127/5583] drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc'
2020-12-02 9:53 ` Pablo Neira Ayuso
@ 2020-12-02 10:05 ` Greg Kroah-Hartman
0 siblings, 0 replies; 3+ messages in thread
From: Greg Kroah-Hartman @ 2020-12-02 10:05 UTC (permalink / raw)
To: kbuild-all
[-- Attachment #1: Type: text/plain, Size: 2149 bytes --]
On Wed, Dec 02, 2020 at 10:53:37AM +0100, Pablo Neira Ayuso wrote:
> On Wed, Dec 02, 2020 at 05:23:16PM +0800, kernel test robot wrote:
> > tree: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.4.y
> > head: 12a5ce113626ce8208aef76d4d2e9fc93ea48ddf
> > commit: f8c60f7a00516820589c4c9da5614e4b7f4d0b2f [4127/5583] net: Fix CONFIG_NET_CLS_ACT=n and CONFIG_NFT_FWD_NETDEV={y, m} build
> > config: x86_64-randconfig-a016-20201202 (attached as .config)
> > compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project 2671fccf0381769276ca8246ec0499adcb9b0355)
> > reproduce (this is a W=1 build):
> > wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> > chmod +x ~/bin/make.cross
> > # install x86_64 cross compiling tool for clang build
> > # apt-get install binutils-x86-64-linux-gnu
> > # https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/commit/?id=f8c60f7a00516820589c4c9da5614e4b7f4d0b2f
> > git remote add linux-stable-rc https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
> > git fetch --no-tags linux-stable-rc linux-5.4.y
> > git checkout f8c60f7a00516820589c4c9da5614e4b7f4d0b2f
> > # save the attached .config to linux build tree
> > COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64
> >
> > If you fix the issue, kindly add following tag as appropriate
> > Reported-by: kernel test robot <lkp@intel.com>
> >
> > All errors (new ones prefixed by >>):
> >
> > >> drivers/staging/octeon/ethernet-tx.c:358:2: error: implicit declaration of function 'skb_reset_tc' [-Werror,-Wimplicit-function-declaration]
> > skb_reset_tc(skb);
> > ^
>
> commit 673b41e04a035d760bc0aff83fa9ee24fd9c2779
> Author: Randy Dunlap <rdunlap@infradead.org>
> Date: Sun Mar 29 09:12:31 2020 -0700
>
> staging/octeon: fix up merge error
>
> Greg, you might want to cherry-pick this commit to 5.4.y to fix this issue.
Now queued up, thanks!
greg k-h