* [luto:x86/fixes 14/19] arch/x86/mm/tlb.c:537:34: error: 'struct mm_struct' has no member named 'membarrier_state'
@ 2020-12-28 1:43 kernel test robot
From: kernel test robot @ 2020-12-28 1:43 UTC (permalink / raw)
To: kbuild-all
[-- Attachment #1: Type: text/plain, Size: 10118 bytes --]
tree: https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/fixes
head: 94afdc4603de67c1cd3195b3ff3d49ee30898b2a
commit: ff7d4d734d212919749ff79b8c08e1615f7de83b [14/19] x86/mm: Handle unlazying membarrier core sync in the arch code
config: i386-tinyconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
# https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?id=ff7d4d734d212919749ff79b8c08e1615f7de83b
git remote add luto https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git
git fetch --no-tags luto x86/fixes
git checkout ff7d4d734d212919749ff79b8c08e1615f7de83b
# save the attached .config to linux build tree
make W=1 ARCH=i386
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
In file included from include/linux/init.h:5,
from arch/x86/mm/tlb.c:2:
arch/x86/mm/tlb.c: In function 'switch_mm_irqs_off':
>> arch/x86/mm/tlb.c:537:34: error: 'struct mm_struct' has no member named 'membarrier_state'
537 | if (unlikely(atomic_read(&next->membarrier_state) &
| ^~
include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
78 | # define unlikely(x) __builtin_expect(!!(x), 0)
| ^
>> arch/x86/mm/tlb.c:538:10: error: 'MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE' undeclared (first use in this function)
538 | MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
78 | # define unlikely(x) __builtin_expect(!!(x), 0)
| ^
arch/x86/mm/tlb.c:538:10: note: each undeclared identifier is reported only once for each function it appears in
538 | MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/compiler.h:78:42: note: in definition of macro 'unlikely'
78 | # define unlikely(x) __builtin_expect(!!(x), 0)
| ^
vim +537 arch/x86/mm/tlb.c
422
423 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
424 struct task_struct *tsk)
425 {
426 struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
427 u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
428 bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
429 unsigned cpu = smp_processor_id();
430 u64 next_tlb_gen;
431 bool need_flush;
432 u16 new_asid;
433
434 /*
435 * NB: The scheduler will call us with prev == next when switching
436 * from lazy TLB mode to normal mode if active_mm isn't changing.
437 * When this happens, we don't assume that CR3 (and hence
438 * cpu_tlbstate.loaded_mm) matches next.
439 *
440 * NB: leave_mm() calls us with prev == NULL and tsk == NULL.
441 */
442
443 /* We don't want flush_tlb_func_* to run concurrently with us. */
444 if (IS_ENABLED(CONFIG_PROVE_LOCKING))
445 WARN_ON_ONCE(!irqs_disabled());
446
447 /*
448 * Verify that CR3 is what we think it is. This will catch
449 * hypothetical buggy code that directly switches to swapper_pg_dir
450 * without going through leave_mm() / switch_mm_irqs_off() or that
451 * does something like write_cr3(read_cr3_pa()).
452 *
453 * Only do this check if CONFIG_DEBUG_VM=y because __read_cr3()
454 * isn't free.
455 */
456 #ifdef CONFIG_DEBUG_VM
457 if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid))) {
458 /*
459 * If we were to BUG here, we'd be very likely to kill
460 * the system so hard that we don't see the call trace.
461 * Try to recover instead by ignoring the error and doing
462 * a global flush to minimize the chance of corruption.
463 *
464 * (This is far from being a fully correct recovery.
465 * Architecturally, the CPU could prefetch something
466 * back into an incorrect ASID slot and leave it there
467 * to cause trouble down the road. It's better than
468 * nothing, though.)
469 */
470 __flush_tlb_all();
471 }
472 #endif
473 this_cpu_write(cpu_tlbstate.is_lazy, false);
474
475 /*
476 * membarrier() support requires that, when we change rq->curr->mm:
477 *
478 * - If next->mm has membarrier registered, a full memory barrier
479 * after writing rq->curr (or rq->curr->mm if we switched the mm
480 * without switching tasks) and before returning to user mode.
481 *
482 * - If next->mm has SYNC_CORE registered, then we sync core before
483 * returning to user mode.
484 *
485 * In the case where prev->mm == next->mm, membarrier() uses an IPI
486 * instead, and no particular barriers are needed while context
487 * switching.
488 *
489 * x86 gets all of this as a side-effect of writing to CR3 except
490 * in the case where we unlazy without flushing.
491 *
492 * All other architectures are civilized and do all of this implicitly
493 * when transitioning from kernel to user mode.
494 */
495 if (real_prev == next) {
496 VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
497 next->context.ctx_id);
498
499 /*
500 * Even in lazy TLB mode, the CPU should stay set in the
501 * mm_cpumask. The TLB shootdown code can figure out
502 * from cpu_tlbstate.is_lazy whether or not to send an IPI.
503 */
504 if (WARN_ON_ONCE(real_prev != &init_mm &&
505 !cpumask_test_cpu(cpu, mm_cpumask(next))))
506 cpumask_set_cpu(cpu, mm_cpumask(next));
507
508 /*
509 * If the CPU is not in lazy TLB mode, we are just switching
510 * from one thread in a process to another thread in the same
511 * process. No TLB flush or membarrier() synchronization
512 * is required.
513 */
514 if (!was_lazy)
515 return;
516
517 /*
518 * Read the tlb_gen to check whether a flush is needed.
519 * If the TLB is up to date, just use it.
520 * The barrier synchronizes with the tlb_gen increment in
521 * the TLB shootdown code.
522 *
523 * As a future optimization opportunity, it's plausible
524 * that the x86 memory model is strong enough that this
525 * smp_mb() isn't needed.
526 */
527 smp_mb();
528 next_tlb_gen = atomic64_read(&next->context.tlb_gen);
529 if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
530 next_tlb_gen) {
531 /*
532 * We switched logical mm but we're not going to
533 * write to CR3. We already did smp_mb() above,
534 * but membarrier() might require a sync_core()
535 * as well.
536 */
> 537 if (unlikely(atomic_read(&next->membarrier_state) &
> 538 MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE))
539 sync_core_before_usermode();
540
541 return;
542 }
543
544 /*
545 * TLB contents went out of date while we were in lazy
546 * mode. Fall through to the TLB switching code below.
547 * No need for an explicit membarrier invocation -- the CR3
548 * write will serialize.
549 */
550 new_asid = prev_asid;
551 need_flush = true;
552 } else {
553 /*
554 * Avoid user/user BTB poisoning by flushing the branch
555 * predictor when switching between processes. This stops
556 * one process from doing Spectre-v2 attacks on another.
557 */
558 cond_ibpb(tsk);
559
560 /*
561 * Stop remote flushes for the previous mm.
562 * Skip kernel threads; we never send init_mm TLB flushing IPIs,
563 * but the bitmap manipulation can cause cache line contention.
564 */
565 if (real_prev != &init_mm) {
566 VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu,
567 mm_cpumask(real_prev)));
568 cpumask_clear_cpu(cpu, mm_cpumask(real_prev));
569 }
570
571 /*
572 * Start remote flushes and then read tlb_gen.
573 */
574 if (next != &init_mm)
575 cpumask_set_cpu(cpu, mm_cpumask(next));
576 next_tlb_gen = atomic64_read(&next->context.tlb_gen);
577
578 choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
579
580 /* Let nmi_uaccess_okay() know that we're changing CR3. */
581 this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
582 barrier();
583 }
584
585 if (need_flush) {
586 this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
587 this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
588 load_new_mm_cr3(next->pgd, new_asid, true);
589
590 trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
591 } else {
592 /* The new ASID is already up to date. */
593 load_new_mm_cr3(next->pgd, new_asid, false);
594
595 trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, 0);
596 }
597
598 /* Make sure we write CR3 before loaded_mm. */
599 barrier();
600
601 this_cpu_write(cpu_tlbstate.loaded_mm, next);
602 this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
603
604 if (next != real_prev) {
605 cr4_update_pce_mm(next);
606 switch_ldt(real_prev, next);
607 }
608 }
609
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org
[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 7262 bytes --]