From: kernel test robot <lkp@intel.com>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: kbuild-all@lists.01.org, linux-kernel@vger.kernel.org,
Vlastimil Babka <vbabka@suse.cz>
Subject: [vbabka:slub-local-lock-v4r1 29/35] mm/slub.c:4396:5: warning: no previous prototype for '__kmem_cache_do_shrink'
Date: Wed, 11 Aug 2021 07:37:04 +0800 [thread overview]
Message-ID: <202108110756.eUwdOgmR-lkp@intel.com> (raw)
[-- Attachment #1: Type: text/plain, Size: 3881 bytes --]
tree: https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git slub-local-lock-v4r1
head: cb9b19ebd40aa23420fd5f8889fb4cd81ff346bc
commit: f141ef3ae9b1a6421e47136f93500ae74e1076e1 [29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
config: xtensa-randconfig-p002-20210810 (attached as .config)
compiler: xtensa-linux-gcc (GCC) 10.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/commit/?id=f141ef3ae9b1a6421e47136f93500ae74e1076e1
git remote add vbabka https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git
git fetch --no-tags vbabka slub-local-lock-v4r1
git checkout f141ef3ae9b1a6421e47136f93500ae74e1076e1
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-10.3.0 make.cross ARCH=xtensa
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> mm/slub.c:4396:5: warning: no previous prototype for '__kmem_cache_do_shrink' [-Wmissing-prototypes]
4396 | int __kmem_cache_do_shrink(struct kmem_cache *s)
| ^~~~~~~~~~~~~~~~~~~~~~
vim +/__kmem_cache_do_shrink +4396 mm/slub.c
4386
4387 /*
4388 * kmem_cache_shrink discards empty slabs and promotes the slabs filled
4389 * up most to the head of the partial lists. New allocations will then
4390 * fill those up and thus they can be removed from the partial lists.
4391 *
4392 * The slabs with the least items are placed last. This results in them
4393 * being allocated from last increasing the chance that the last objects
4394 * are freed in them.
4395 */
> 4396 int __kmem_cache_do_shrink(struct kmem_cache *s)
4397 {
4398 int node;
4399 int i;
4400 struct kmem_cache_node *n;
4401 struct page *page;
4402 struct page *t;
4403 struct list_head discard;
4404 struct list_head promote[SHRINK_PROMOTE_MAX];
4405 unsigned long flags;
4406 int ret = 0;
4407
4408 for_each_kmem_cache_node(s, node, n) {
4409 INIT_LIST_HEAD(&discard);
4410 for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
4411 INIT_LIST_HEAD(promote + i);
4412
4413 spin_lock_irqsave(&n->list_lock, flags);
4414
4415 /*
4416 * Build lists of slabs to discard or promote.
4417 *
4418 * Note that concurrent frees may occur while we hold the
4419 * list_lock. page->inuse here is the upper limit.
4420 */
4421 list_for_each_entry_safe(page, t, &n->partial, slab_list) {
4422 int free = page->objects - page->inuse;
4423
4424 /* Do not reread page->inuse */
4425 barrier();
4426
4427 /* We do not keep full slabs on the list */
4428 BUG_ON(free <= 0);
4429
4430 if (free == page->objects) {
4431 list_move(&page->slab_list, &discard);
4432 n->nr_partial--;
4433 } else if (free <= SHRINK_PROMOTE_MAX)
4434 list_move(&page->slab_list, promote + free - 1);
4435 }
4436
4437 /*
4438 * Promote the slabs filled up most to the head of the
4439 * partial list.
4440 */
4441 for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
4442 list_splice(promote + i, &n->partial);
4443
4444 spin_unlock_irqrestore(&n->list_lock, flags);
4445
4446 /* Release empty slabs */
4447 list_for_each_entry_safe(page, t, &discard, slab_list)
4448 discard_slab(s, page);
4449
4450 if (slabs_node(s, node))
4451 ret = 1;
4452 }
4453
4454 return ret;
4455 }
4456
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 24190 bytes --]