* [linux-next:master 14900/14955] arch/riscv/mm/fault.c:275 do_page_fault() warn: inconsistent returns 'mm->mmap_lock'.
@ 2020-06-03 23:23 kernel test robot
  0 siblings, 0 replies; only message in thread
From: kernel test robot @ 2020-06-03 23:23 UTC (permalink / raw)
  To: kbuild


CC: kbuild-all@lists.01.org
TO: Michel Lespinasse <walken@google.com>
CC: Vlastimil Babka <vbabka@suse.cz>
CC: Davidlohr Bueso <dbueso@suse.de>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux Memory Management List <linux-mm@kvack.org>

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
head:   0e21d4620dd047da7952f44a2e1ac777ded2d57e
commit: 4601e502817fe919dab54c779f7a037eb4e92c78 [14900/14955] mmap locking API: rename mmap_sem to mmap_lock
:::::: branch date: 2 days ago
:::::: commit date: 2 days ago
config: riscv-randconfig-m031-20200603 (attached as .config)
compiler: riscv64-linux-gcc (GCC) 9.3.0

If you fix the issue, kindly add the following tags as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

smatch warnings:
arch/riscv/mm/fault.c:275 do_page_fault() warn: inconsistent returns 'mm->mmap_lock'.

# https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=4601e502817fe919dab54c779f7a037eb4e92c78
git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
git remote update linux-next
git checkout 4601e502817fe919dab54c779f7a037eb4e92c78
vim +275 arch/riscv/mm/fault.c

ffaee2728f9b27 Paul Walmsley     2019-10-17   22  
07037db5d479f9 Palmer Dabbelt    2017-07-10   23  /*
07037db5d479f9 Palmer Dabbelt    2017-07-10   24   * This routine handles page faults.  It determines the address and the
07037db5d479f9 Palmer Dabbelt    2017-07-10   25   * problem, and then passes it off to one of the appropriate routines.
07037db5d479f9 Palmer Dabbelt    2017-07-10   26   */
07037db5d479f9 Palmer Dabbelt    2017-07-10   27  asmlinkage void do_page_fault(struct pt_regs *regs)
07037db5d479f9 Palmer Dabbelt    2017-07-10   28  {
07037db5d479f9 Palmer Dabbelt    2017-07-10   29  	struct task_struct *tsk;
07037db5d479f9 Palmer Dabbelt    2017-07-10   30  	struct vm_area_struct *vma;
07037db5d479f9 Palmer Dabbelt    2017-07-10   31  	struct mm_struct *mm;
07037db5d479f9 Palmer Dabbelt    2017-07-10   32  	unsigned long addr, cause;
dde1607248328c Peter Xu          2020-04-01   33  	unsigned int flags = FAULT_FLAG_DEFAULT;
50a7ca3c6fc869 Souptick Joarder  2018-08-17   34  	int code = SEGV_MAPERR;
50a7ca3c6fc869 Souptick Joarder  2018-08-17   35  	vm_fault_t fault;
07037db5d479f9 Palmer Dabbelt    2017-07-10   36  
a4c3733d32a72f Christoph Hellwig 2019-10-28   37  	cause = regs->cause;
a4c3733d32a72f Christoph Hellwig 2019-10-28   38  	addr = regs->badaddr;
07037db5d479f9 Palmer Dabbelt    2017-07-10   39  
07037db5d479f9 Palmer Dabbelt    2017-07-10   40  	tsk = current;
07037db5d479f9 Palmer Dabbelt    2017-07-10   41  	mm = tsk->mm;
07037db5d479f9 Palmer Dabbelt    2017-07-10   42  
07037db5d479f9 Palmer Dabbelt    2017-07-10   43  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10   44  	 * Fault-in kernel-space virtual memory on-demand.
07037db5d479f9 Palmer Dabbelt    2017-07-10   45  	 * The 'reference' page table is init_mm.pgd.
07037db5d479f9 Palmer Dabbelt    2017-07-10   46  	 *
07037db5d479f9 Palmer Dabbelt    2017-07-10   47  	 * NOTE! We MUST NOT take any locks for this case. We may
07037db5d479f9 Palmer Dabbelt    2017-07-10   48  	 * be in an interrupt or a critical region, and should
07037db5d479f9 Palmer Dabbelt    2017-07-10   49  	 * only copy the information from the master page table,
07037db5d479f9 Palmer Dabbelt    2017-07-10   50  	 * nothing more.
07037db5d479f9 Palmer Dabbelt    2017-07-10   51  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10   52  	if (unlikely((addr >= VMALLOC_START) && (addr <= VMALLOC_END)))
07037db5d479f9 Palmer Dabbelt    2017-07-10   53  		goto vmalloc_fault;
07037db5d479f9 Palmer Dabbelt    2017-07-10   54  
07037db5d479f9 Palmer Dabbelt    2017-07-10   55  	/* Enable interrupts if they were enabled in the parent context. */
a4c3733d32a72f Christoph Hellwig 2019-10-28   56  	if (likely(regs->status & SR_PIE))
07037db5d479f9 Palmer Dabbelt    2017-07-10   57  		local_irq_enable();
07037db5d479f9 Palmer Dabbelt    2017-07-10   58  
07037db5d479f9 Palmer Dabbelt    2017-07-10   59  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10   60  	 * If we're in an interrupt, have no user context, or are running
07037db5d479f9 Palmer Dabbelt    2017-07-10   61  	 * in an atomic region, then we must not take the fault.
07037db5d479f9 Palmer Dabbelt    2017-07-10   62  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10   63  	if (unlikely(faulthandler_disabled() || !mm))
07037db5d479f9 Palmer Dabbelt    2017-07-10   64  		goto no_context;
07037db5d479f9 Palmer Dabbelt    2017-07-10   65  
07037db5d479f9 Palmer Dabbelt    2017-07-10   66  	if (user_mode(regs))
07037db5d479f9 Palmer Dabbelt    2017-07-10   67  		flags |= FAULT_FLAG_USER;
07037db5d479f9 Palmer Dabbelt    2017-07-10   68  
07037db5d479f9 Palmer Dabbelt    2017-07-10   69  	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10   70  
07037db5d479f9 Palmer Dabbelt    2017-07-10   71  retry:
00b0a4590383ec Michel Lespinasse 2020-05-30   72  	mmap_read_lock(mm);
07037db5d479f9 Palmer Dabbelt    2017-07-10   73  	vma = find_vma(mm, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10   74  	if (unlikely(!vma))
07037db5d479f9 Palmer Dabbelt    2017-07-10   75  		goto bad_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10   76  	if (likely(vma->vm_start <= addr))
07037db5d479f9 Palmer Dabbelt    2017-07-10   77  		goto good_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10   78  	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
07037db5d479f9 Palmer Dabbelt    2017-07-10   79  		goto bad_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10   80  	if (unlikely(expand_stack(vma, addr)))
07037db5d479f9 Palmer Dabbelt    2017-07-10   81  		goto bad_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10   82  
07037db5d479f9 Palmer Dabbelt    2017-07-10   83  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10   84  	 * Ok, we have a good vm_area for this memory access, so
07037db5d479f9 Palmer Dabbelt    2017-07-10   85  	 * we can handle it.
07037db5d479f9 Palmer Dabbelt    2017-07-10   86  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10   87  good_area:
07037db5d479f9 Palmer Dabbelt    2017-07-10   88  	code = SEGV_ACCERR;
07037db5d479f9 Palmer Dabbelt    2017-07-10   89  
07037db5d479f9 Palmer Dabbelt    2017-07-10   90  	switch (cause) {
07037db5d479f9 Palmer Dabbelt    2017-07-10   91  	case EXC_INST_PAGE_FAULT:
07037db5d479f9 Palmer Dabbelt    2017-07-10   92  		if (!(vma->vm_flags & VM_EXEC))
07037db5d479f9 Palmer Dabbelt    2017-07-10   93  			goto bad_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10   94  		break;
07037db5d479f9 Palmer Dabbelt    2017-07-10   95  	case EXC_LOAD_PAGE_FAULT:
07037db5d479f9 Palmer Dabbelt    2017-07-10   96  		if (!(vma->vm_flags & VM_READ))
07037db5d479f9 Palmer Dabbelt    2017-07-10   97  			goto bad_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10   98  		break;
07037db5d479f9 Palmer Dabbelt    2017-07-10   99  	case EXC_STORE_PAGE_FAULT:
07037db5d479f9 Palmer Dabbelt    2017-07-10  100  		if (!(vma->vm_flags & VM_WRITE))
07037db5d479f9 Palmer Dabbelt    2017-07-10  101  			goto bad_area;
07037db5d479f9 Palmer Dabbelt    2017-07-10  102  		flags |= FAULT_FLAG_WRITE;
07037db5d479f9 Palmer Dabbelt    2017-07-10  103  		break;
07037db5d479f9 Palmer Dabbelt    2017-07-10  104  	default:
07037db5d479f9 Palmer Dabbelt    2017-07-10  105  		panic("%s: unhandled cause %lu", __func__, cause);
07037db5d479f9 Palmer Dabbelt    2017-07-10  106  	}
07037db5d479f9 Palmer Dabbelt    2017-07-10  107  
07037db5d479f9 Palmer Dabbelt    2017-07-10  108  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  109  	 * If for any reason at all we could not handle the fault,
07037db5d479f9 Palmer Dabbelt    2017-07-10  110  	 * make sure we exit gracefully rather than endlessly redo
07037db5d479f9 Palmer Dabbelt    2017-07-10  111  	 * the fault.
07037db5d479f9 Palmer Dabbelt    2017-07-10  112  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  113  	fault = handle_mm_fault(vma, addr, flags);
07037db5d479f9 Palmer Dabbelt    2017-07-10  114  
07037db5d479f9 Palmer Dabbelt    2017-07-10  115  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  116  	 * If we need to retry but a fatal signal is pending, handle the
07037db5d479f9 Palmer Dabbelt    2017-07-10  117  	 * signal first. We do not need to release the mmap_sem because it
07037db5d479f9 Palmer Dabbelt    2017-07-10  118  	 * would already be released in __lock_page_or_retry in mm/filemap.c.
07037db5d479f9 Palmer Dabbelt    2017-07-10  119  	 */
4ef873226ceb9c Peter Xu          2020-04-01  120  	if (fault_signal_pending(fault, regs))
07037db5d479f9 Palmer Dabbelt    2017-07-10  121  		return;
07037db5d479f9 Palmer Dabbelt    2017-07-10  122  
07037db5d479f9 Palmer Dabbelt    2017-07-10  123  	if (unlikely(fault & VM_FAULT_ERROR)) {
07037db5d479f9 Palmer Dabbelt    2017-07-10  124  		if (fault & VM_FAULT_OOM)
07037db5d479f9 Palmer Dabbelt    2017-07-10  125  			goto out_of_memory;
07037db5d479f9 Palmer Dabbelt    2017-07-10  126  		else if (fault & VM_FAULT_SIGBUS)
07037db5d479f9 Palmer Dabbelt    2017-07-10  127  			goto do_sigbus;
07037db5d479f9 Palmer Dabbelt    2017-07-10  128  		BUG();
07037db5d479f9 Palmer Dabbelt    2017-07-10  129  	}
07037db5d479f9 Palmer Dabbelt    2017-07-10  130  
07037db5d479f9 Palmer Dabbelt    2017-07-10  131  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  132  	 * Major/minor page fault accounting is only done on the
07037db5d479f9 Palmer Dabbelt    2017-07-10  133  	 * initial attempt. If we go through a retry, it is extremely
07037db5d479f9 Palmer Dabbelt    2017-07-10  134  	 * likely that the page will be found in page cache at that point.
07037db5d479f9 Palmer Dabbelt    2017-07-10  135  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  136  	if (flags & FAULT_FLAG_ALLOW_RETRY) {
07037db5d479f9 Palmer Dabbelt    2017-07-10  137  		if (fault & VM_FAULT_MAJOR) {
07037db5d479f9 Palmer Dabbelt    2017-07-10  138  			tsk->maj_flt++;
07037db5d479f9 Palmer Dabbelt    2017-07-10  139  			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ,
07037db5d479f9 Palmer Dabbelt    2017-07-10  140  				      1, regs, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  141  		} else {
07037db5d479f9 Palmer Dabbelt    2017-07-10  142  			tsk->min_flt++;
07037db5d479f9 Palmer Dabbelt    2017-07-10  143  			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN,
07037db5d479f9 Palmer Dabbelt    2017-07-10  144  				      1, regs, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  145  		}
07037db5d479f9 Palmer Dabbelt    2017-07-10  146  		if (fault & VM_FAULT_RETRY) {
07037db5d479f9 Palmer Dabbelt    2017-07-10  147  			flags |= FAULT_FLAG_TRIED;
07037db5d479f9 Palmer Dabbelt    2017-07-10  148  
07037db5d479f9 Palmer Dabbelt    2017-07-10  149  			/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  150  			 * No need to up_read(&mm->mmap_sem) as we would
07037db5d479f9 Palmer Dabbelt    2017-07-10  151  			 * have already released it in __lock_page_or_retry
07037db5d479f9 Palmer Dabbelt    2017-07-10  152  			 * in mm/filemap.c.
07037db5d479f9 Palmer Dabbelt    2017-07-10  153  			 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  154  			goto retry;
07037db5d479f9 Palmer Dabbelt    2017-07-10  155  		}
07037db5d479f9 Palmer Dabbelt    2017-07-10  156  	}
07037db5d479f9 Palmer Dabbelt    2017-07-10  157  
00b0a4590383ec Michel Lespinasse 2020-05-30  158  	mmap_read_unlock(mm);
07037db5d479f9 Palmer Dabbelt    2017-07-10  159  	return;
07037db5d479f9 Palmer Dabbelt    2017-07-10  160  
07037db5d479f9 Palmer Dabbelt    2017-07-10  161  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  162  	 * Something tried to access memory that isn't in our memory map.
07037db5d479f9 Palmer Dabbelt    2017-07-10  163  	 * Fix it, but check if it's kernel or user first.
07037db5d479f9 Palmer Dabbelt    2017-07-10  164  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  165  bad_area:
00b0a4590383ec Michel Lespinasse 2020-05-30  166  	mmap_read_unlock(mm);
07037db5d479f9 Palmer Dabbelt    2017-07-10  167  	/* User mode accesses just cause a SIGSEGV */
07037db5d479f9 Palmer Dabbelt    2017-07-10  168  	if (user_mode(regs)) {
6f25a967646aa3 Eric W. Biederman 2019-02-05  169  		do_trap(regs, SIGSEGV, code, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  170  		return;
07037db5d479f9 Palmer Dabbelt    2017-07-10  171  	}
07037db5d479f9 Palmer Dabbelt    2017-07-10  172  
07037db5d479f9 Palmer Dabbelt    2017-07-10  173  no_context:
07037db5d479f9 Palmer Dabbelt    2017-07-10  174  	/* Are we prepared to handle this kernel fault? */
07037db5d479f9 Palmer Dabbelt    2017-07-10  175  	if (fixup_exception(regs))
07037db5d479f9 Palmer Dabbelt    2017-07-10  176  		return;
07037db5d479f9 Palmer Dabbelt    2017-07-10  177  
07037db5d479f9 Palmer Dabbelt    2017-07-10  178  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  179  	 * Oops. The kernel tried to access some bad page. We'll have to
07037db5d479f9 Palmer Dabbelt    2017-07-10  180  	 * terminate things with extreme prejudice.
07037db5d479f9 Palmer Dabbelt    2017-07-10  181  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  182  	bust_spinlocks(1);
07037db5d479f9 Palmer Dabbelt    2017-07-10  183  	pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n",
07037db5d479f9 Palmer Dabbelt    2017-07-10  184  		(addr < PAGE_SIZE) ? "NULL pointer dereference" :
07037db5d479f9 Palmer Dabbelt    2017-07-10  185  		"paging request", addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  186  	die(regs, "Oops");
07037db5d479f9 Palmer Dabbelt    2017-07-10  187  	do_exit(SIGKILL);
07037db5d479f9 Palmer Dabbelt    2017-07-10  188  
07037db5d479f9 Palmer Dabbelt    2017-07-10  189  	/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  190  	 * We ran out of memory, call the OOM killer, and return the userspace
07037db5d479f9 Palmer Dabbelt    2017-07-10  191  	 * (which will retry the fault, or kill us if we got oom-killed).
07037db5d479f9 Palmer Dabbelt    2017-07-10  192  	 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  193  out_of_memory:
00b0a4590383ec Michel Lespinasse 2020-05-30  194  	mmap_read_unlock(mm);
07037db5d479f9 Palmer Dabbelt    2017-07-10  195  	if (!user_mode(regs))
07037db5d479f9 Palmer Dabbelt    2017-07-10  196  		goto no_context;
07037db5d479f9 Palmer Dabbelt    2017-07-10  197  	pagefault_out_of_memory();
07037db5d479f9 Palmer Dabbelt    2017-07-10  198  	return;
07037db5d479f9 Palmer Dabbelt    2017-07-10  199  
07037db5d479f9 Palmer Dabbelt    2017-07-10  200  do_sigbus:
00b0a4590383ec Michel Lespinasse 2020-05-30  201  	mmap_read_unlock(mm);
07037db5d479f9 Palmer Dabbelt    2017-07-10  202  	/* Kernel mode? Handle exceptions or die */
07037db5d479f9 Palmer Dabbelt    2017-07-10  203  	if (!user_mode(regs))
07037db5d479f9 Palmer Dabbelt    2017-07-10  204  		goto no_context;
6f25a967646aa3 Eric W. Biederman 2019-02-05  205  	do_trap(regs, SIGBUS, BUS_ADRERR, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  206  	return;
07037db5d479f9 Palmer Dabbelt    2017-07-10  207  
07037db5d479f9 Palmer Dabbelt    2017-07-10  208  vmalloc_fault:
07037db5d479f9 Palmer Dabbelt    2017-07-10  209  	{
07037db5d479f9 Palmer Dabbelt    2017-07-10  210  		pgd_t *pgd, *pgd_k;
07037db5d479f9 Palmer Dabbelt    2017-07-10  211  		pud_t *pud, *pud_k;
07037db5d479f9 Palmer Dabbelt    2017-07-10  212  		p4d_t *p4d, *p4d_k;
07037db5d479f9 Palmer Dabbelt    2017-07-10  213  		pmd_t *pmd, *pmd_k;
07037db5d479f9 Palmer Dabbelt    2017-07-10  214  		pte_t *pte_k;
07037db5d479f9 Palmer Dabbelt    2017-07-10  215  		int index;
07037db5d479f9 Palmer Dabbelt    2017-07-10  216  
8fef9900d43feb Andreas Schwab    2019-05-07  217  		/* User mode accesses just cause a SIGSEGV */
07037db5d479f9 Palmer Dabbelt    2017-07-10  218  		if (user_mode(regs))
6f25a967646aa3 Eric W. Biederman 2019-02-05  219  			return do_trap(regs, SIGSEGV, code, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  220  
07037db5d479f9 Palmer Dabbelt    2017-07-10  221  		/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  222  		 * Synchronize this task's top level page-table
07037db5d479f9 Palmer Dabbelt    2017-07-10  223  		 * with the 'reference' page table.
07037db5d479f9 Palmer Dabbelt    2017-07-10  224  		 *
07037db5d479f9 Palmer Dabbelt    2017-07-10  225  		 * Do _not_ use "tsk->active_mm->pgd" here.
07037db5d479f9 Palmer Dabbelt    2017-07-10  226  		 * We might be inside an interrupt in the middle
07037db5d479f9 Palmer Dabbelt    2017-07-10  227  		 * of a task switch.
07037db5d479f9 Palmer Dabbelt    2017-07-10  228  		 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  229  		index = pgd_index(addr);
a3182c91ef4e7d Anup Patel        2019-04-25  230  		pgd = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index;
07037db5d479f9 Palmer Dabbelt    2017-07-10  231  		pgd_k = init_mm.pgd + index;
07037db5d479f9 Palmer Dabbelt    2017-07-10  232  
07037db5d479f9 Palmer Dabbelt    2017-07-10  233  		if (!pgd_present(*pgd_k))
07037db5d479f9 Palmer Dabbelt    2017-07-10  234  			goto no_context;
07037db5d479f9 Palmer Dabbelt    2017-07-10  235  		set_pgd(pgd, *pgd_k);
07037db5d479f9 Palmer Dabbelt    2017-07-10  236  
07037db5d479f9 Palmer Dabbelt    2017-07-10  237  		p4d = p4d_offset(pgd, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  238  		p4d_k = p4d_offset(pgd_k, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  239  		if (!p4d_present(*p4d_k))
07037db5d479f9 Palmer Dabbelt    2017-07-10  240  			goto no_context;
07037db5d479f9 Palmer Dabbelt    2017-07-10  241  
07037db5d479f9 Palmer Dabbelt    2017-07-10  242  		pud = pud_offset(p4d, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  243  		pud_k = pud_offset(p4d_k, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  244  		if (!pud_present(*pud_k))
07037db5d479f9 Palmer Dabbelt    2017-07-10  245  			goto no_context;
07037db5d479f9 Palmer Dabbelt    2017-07-10  246  
07037db5d479f9 Palmer Dabbelt    2017-07-10  247  		/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  248  		 * Since the vmalloc area is global, it is unnecessary
07037db5d479f9 Palmer Dabbelt    2017-07-10  249  		 * to copy individual PTEs
07037db5d479f9 Palmer Dabbelt    2017-07-10  250  		 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  251  		pmd = pmd_offset(pud, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  252  		pmd_k = pmd_offset(pud_k, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  253  		if (!pmd_present(*pmd_k))
07037db5d479f9 Palmer Dabbelt    2017-07-10  254  			goto no_context;
07037db5d479f9 Palmer Dabbelt    2017-07-10  255  		set_pmd(pmd, *pmd_k);
07037db5d479f9 Palmer Dabbelt    2017-07-10  256  
07037db5d479f9 Palmer Dabbelt    2017-07-10  257  		/*
07037db5d479f9 Palmer Dabbelt    2017-07-10  258  		 * Make sure the actual PTE exists as well to
07037db5d479f9 Palmer Dabbelt    2017-07-10  259  		 * catch kernel vmalloc-area accesses to non-mapped
07037db5d479f9 Palmer Dabbelt    2017-07-10  260  		 * addresses. If we don't do this, this will just
07037db5d479f9 Palmer Dabbelt    2017-07-10  261  		 * silently loop forever.
07037db5d479f9 Palmer Dabbelt    2017-07-10  262  		 */
07037db5d479f9 Palmer Dabbelt    2017-07-10  263  		pte_k = pte_offset_kernel(pmd_k, addr);
07037db5d479f9 Palmer Dabbelt    2017-07-10  264  		if (!pte_present(*pte_k))
07037db5d479f9 Palmer Dabbelt    2017-07-10  265  			goto no_context;
bf587caae305ae ShihPo Hung       2019-06-17  266  
bf587caae305ae ShihPo Hung       2019-06-17  267  		/*
bf587caae305ae ShihPo Hung       2019-06-17  268  		 * The kernel assumes that TLBs don't cache invalid
bf587caae305ae ShihPo Hung       2019-06-17  269  		 * entries, but in RISC-V, SFENCE.VMA specifies an
bf587caae305ae ShihPo Hung       2019-06-17  270  		 * ordering constraint, not a cache flush; it is
bf587caae305ae ShihPo Hung       2019-06-17  271  		 * necessary even after writing invalid entries.
bf587caae305ae ShihPo Hung       2019-06-17  272  		 */
bf587caae305ae ShihPo Hung       2019-06-17  273  		local_flush_tlb_page(addr);
bf587caae305ae ShihPo Hung       2019-06-17  274  
07037db5d479f9 Palmer Dabbelt    2017-07-10 @275  		return;

:::::: The code at line 275 was first introduced by commit
:::::: 07037db5d479f90377c998259a4f9a469c404edf RISC-V: Paging and MMU

:::::: TO: Palmer Dabbelt <palmer@dabbelt.com>
:::::: CC: Palmer Dabbelt <palmer@dabbelt.com>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

