From: kernel test robot <lkp@intel.com>
To: kbuild-all@lists.01.org
Subject: Re: [RFC][PATCH] netfs, afs, ceph: Use folios
Date: Wed, 11 Aug 2021 23:50:53 +0800 [thread overview]
Message-ID: <202108112318.tCOvn1MG-lkp@intel.com> (raw)
In-Reply-To: <2408234.1628687271@warthog.procyon.org.uk>
Hi David,
[FYI: this is a private test report for your RFC patch.]
[auto build test ERROR on next-20210811]
[cannot apply to tip/perf/core linux/master linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3 v5.14-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/David-Howells/netfs-afs-ceph-Use-folios/20210811-210906
base: 8ca403f3e7a23c4513046ad8d107adfbe4703362
config: xtensa-randconfig-r026-20210811 (attached as .config)
compiler: xtensa-linux-gcc (GCC) 10.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/a665390ce411c517db3f70ae59cdaa874cb914ba
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review David-Howells/netfs-afs-ceph-Use-folios/20210811-210906
git checkout a665390ce411c517db3f70ae59cdaa874cb914ba
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-10.3.0 make.cross ARCH=xtensa
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
fs/netfs/read_helper.c: In function 'netfs_rreq_unlock':
>> fs/netfs/read_helper.c:435:4: error: implicit declaration of function 'flush_dcache_folio'; did you mean 'flush_dcache_page'? [-Werror=implicit-function-declaration]
435 | flush_dcache_folio(folio);
| ^~~~~~~~~~~~~~~~~~
| flush_dcache_page
cc1: some warnings being treated as errors
vim +435 fs/netfs/read_helper.c
   368	
   369	/*
   370	 * Unlock the folios in a read operation.  We need to set PG_fscache on any
   371	 * folios we're going to write back before we unlock them.
   372	 */
   373	static void netfs_rreq_unlock(struct netfs_read_request *rreq)
   374	{
   375		struct netfs_read_subrequest *subreq;
   376		struct folio *folio;
   377		unsigned int iopos, account = 0;
   378		pgoff_t start_page = rreq->start / PAGE_SIZE;
   379		pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1;
   380		bool subreq_failed = false;
   381	
   382		XA_STATE(xas, &rreq->mapping->i_pages, start_page);
   383	
   384		if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) {
   385			__clear_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
   386			list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
   387				__clear_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
   388		 	}
   389		}
   390	
   391		/* Walk through the pagecache and the I/O request lists simultaneously.
   392		 * We may have a mixture of cached and uncached sections and we only
   393		 * really want to write out the uncached sections.  This is slightly
   394		 * complicated by the possibility that we might have huge pages with a
   395		 * mixture inside.
   396		 */
   397		subreq = list_first_entry(&rreq->subrequests,
   398					  struct netfs_read_subrequest, rreq_link);
   399		iopos = 0;
   400		subreq_failed = (subreq->error < 0);
   401	
   402		trace_netfs_rreq(rreq, netfs_rreq_trace_unlock);
   403	
   404		rcu_read_lock();
   405		xas_for_each(&xas, folio, last_page) {
   406			unsigned int pgpos = (folio_index(folio) - start_page) * PAGE_SIZE;
   407			unsigned int pgend = pgpos + folio_size(folio);
   408			bool pg_failed = false;
   409	
   410			for (;;) {
   411				if (!subreq) {
   412					pg_failed = true;
   413					break;
   414				}
   415				if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags))
   416					folio_start_fscache(folio);
   417				pg_failed |= subreq_failed;
   418				if (pgend < iopos + subreq->len)
   419					break;
   420	
   421				account += subreq->transferred;
   422				iopos += subreq->len;
   423				if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) {
   424					subreq = list_next_entry(subreq, rreq_link);
   425					subreq_failed = (subreq->error < 0);
   426				} else {
   427					subreq = NULL;
   428					subreq_failed = false;
   429				}
   430				if (pgend == iopos)
   431					break;
   432			}
   433	
   434			if (!pg_failed) {
 > 435			flush_dcache_folio(folio);
   436				folio_mark_uptodate(folio);
   437			}
   438	
   439			if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {
   440				if (folio_index(folio) == rreq->no_unlock_folio &&
   441				    test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags))
   442					_debug("no unlock");
   443				else
   444					folio_unlock(folio);
   445			}
   446		}
   447	rcu_read_unlock();
   448	
   449	task_io_account_read(account);
   450	if (rreq->netfs_ops->done)
   451		rreq->netfs_ops->done(rreq);
   452	}
   453	
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
Thread overview: 9+ messages
2021-08-11 13:07 [RFC][PATCH] netfs, afs, ceph: Use folios David Howells
2021-08-11 13:54 ` Matthew Wilcox
2021-08-11 15:50 ` kernel test robot [this message]
2021-08-11 16:33 ` kernel test robot
2021-08-11 21:05 ` [RFC][PATCH] afs: Use folios in directory handling David Howells
2021-08-12 16:07 ` [RFC][PATCH] netfs, afs, ceph: Use folios Matthew Wilcox
2021-08-13 6:53 ` Christoph Hellwig
2021-08-13 8:17 ` David Howells
2021-08-12 20:47 ` David Howells