* [linux-next:master 5756/5946] mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used
@ 2021-06-01 12:10 kernel test robot
From: kernel test robot @ 2021-06-01 12:10 UTC (permalink / raw)
To: Mina Almasry; +Cc: kbuild-all, Linux Memory Management List, Andrew Morton
tree: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
head: 392d24c0d06bc89a762ba66977db41e53c21bfb5
commit: 1786d001262006df52cdcda4cbc0c8087a0200ec [5756/5946] mm, hugetlb: fix racy resv_huge_pages underflow on UFFDIO_COPY
config: powerpc64-randconfig-r022-20210601 (attached as .config)
compiler: powerpc-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=1786d001262006df52cdcda4cbc0c8087a0200ec
git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
git fetch --no-tags linux-next master
git checkout 1786d001262006df52cdcda4cbc0c8087a0200ec
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=powerpc64
If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
mm/userfaultfd.c: In function '__mcopy_atomic_hugetlb':
>> mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used [-Wunused-but-set-variable]
212 | int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
| ^~~~~~~~~~~~~~~
vim +/vm_alloc_shared +212 mm/userfaultfd.c
c1a4de99fada21 Andrea Arcangeli 2015-09-04 199
60d4d2d2b40e44 Mike Kravetz 2017-02-22 200 #ifdef CONFIG_HUGETLB_PAGE
60d4d2d2b40e44 Mike Kravetz 2017-02-22 201 /*
60d4d2d2b40e44 Mike Kravetz 2017-02-22 202 * __mcopy_atomic processing for HUGETLB vmas. Note that this routine is
c1e8d7c6a7a682 Michel Lespinasse 2020-06-08 203 * called with mmap_lock held, it will release mmap_lock before returning.
60d4d2d2b40e44 Mike Kravetz 2017-02-22 204 */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 205 static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 206 struct vm_area_struct *dst_vma,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 207 unsigned long dst_start,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 208 unsigned long src_start,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 209 unsigned long len,
f619147104c8ea Axel Rasmussen 2021-05-04 210 enum mcopy_atomic_mode mode)
60d4d2d2b40e44 Mike Kravetz 2017-02-22 211 {
1c9e8def43a345 Mike Kravetz 2017-02-22 @212 int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
1c9e8def43a345 Mike Kravetz 2017-02-22 213 int vm_shared = dst_vma->vm_flags & VM_SHARED;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 214 ssize_t err;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 215 pte_t *dst_pte;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 216 unsigned long src_addr, dst_addr;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 217 long copied;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 218 struct page *page;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 219 unsigned long vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 220 pgoff_t idx;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 221 u32 hash;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 222 struct address_space *mapping;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 223
60d4d2d2b40e44 Mike Kravetz 2017-02-22 224 /*
60d4d2d2b40e44 Mike Kravetz 2017-02-22 225 * There is no default zero huge page for all huge page sizes as
60d4d2d2b40e44 Mike Kravetz 2017-02-22 226 * supported by hugetlb. A PMD_SIZE huge pages may exist as used
60d4d2d2b40e44 Mike Kravetz 2017-02-22 227 * by THP. Since we can not reliably insert a zero page, this
60d4d2d2b40e44 Mike Kravetz 2017-02-22 228 * feature is not supported.
60d4d2d2b40e44 Mike Kravetz 2017-02-22 229 */
f619147104c8ea Axel Rasmussen 2021-05-04 230 if (mode == MCOPY_ATOMIC_ZEROPAGE) {
d8ed45c5dcd455 Michel Lespinasse 2020-06-08 231 mmap_read_unlock(dst_mm);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 232 return -EINVAL;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 233 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 234
60d4d2d2b40e44 Mike Kravetz 2017-02-22 235 src_addr = src_start;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 236 dst_addr = dst_start;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 237 copied = 0;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 238 page = NULL;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 239 vma_hpagesize = vma_kernel_pagesize(dst_vma);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 240
60d4d2d2b40e44 Mike Kravetz 2017-02-22 241 /*
60d4d2d2b40e44 Mike Kravetz 2017-02-22 242 * Validate alignment based on huge page size
60d4d2d2b40e44 Mike Kravetz 2017-02-22 243 */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 244 err = -EINVAL;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 245 if (dst_start & (vma_hpagesize - 1) || len & (vma_hpagesize - 1))
60d4d2d2b40e44 Mike Kravetz 2017-02-22 246 goto out_unlock;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 247
60d4d2d2b40e44 Mike Kravetz 2017-02-22 248 retry:
60d4d2d2b40e44 Mike Kravetz 2017-02-22 249 /*
c1e8d7c6a7a682 Michel Lespinasse 2020-06-08 250 * On routine entry dst_vma is set. If we had to drop mmap_lock and
60d4d2d2b40e44 Mike Kravetz 2017-02-22 251 * retry, dst_vma will be set to NULL and we must lookup again.
60d4d2d2b40e44 Mike Kravetz 2017-02-22 252 */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 253 if (!dst_vma) {
27d02568f529e9 Mike Rapoport 2017-02-24 254 err = -ENOENT;
643aa36eadebdc Wei Yang 2019-11-30 255 dst_vma = find_dst_vma(dst_mm, dst_start, len);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 256 if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
60d4d2d2b40e44 Mike Kravetz 2017-02-22 257 goto out_unlock;
1c9e8def43a345 Mike Kravetz 2017-02-22 258
27d02568f529e9 Mike Rapoport 2017-02-24 259 err = -EINVAL;
27d02568f529e9 Mike Rapoport 2017-02-24 260 if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
27d02568f529e9 Mike Rapoport 2017-02-24 261 goto out_unlock;
27d02568f529e9 Mike Rapoport 2017-02-24 262
1c9e8def43a345 Mike Kravetz 2017-02-22 263 vm_shared = dst_vma->vm_flags & VM_SHARED;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 264 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 265
60d4d2d2b40e44 Mike Kravetz 2017-02-22 266 /*
1c9e8def43a345 Mike Kravetz 2017-02-22 267 * If not shared, ensure the dst_vma has a anon_vma.
60d4d2d2b40e44 Mike Kravetz 2017-02-22 268 */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 269 err = -ENOMEM;
1c9e8def43a345 Mike Kravetz 2017-02-22 270 if (!vm_shared) {
60d4d2d2b40e44 Mike Kravetz 2017-02-22 271 if (unlikely(anon_vma_prepare(dst_vma)))
60d4d2d2b40e44 Mike Kravetz 2017-02-22 272 goto out_unlock;
1c9e8def43a345 Mike Kravetz 2017-02-22 273 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 274
60d4d2d2b40e44 Mike Kravetz 2017-02-22 275 while (src_addr < src_start + len) {
60d4d2d2b40e44 Mike Kravetz 2017-02-22 276 BUG_ON(dst_addr >= dst_start + len);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 277
60d4d2d2b40e44 Mike Kravetz 2017-02-22 278 /*
c0d0381ade7988 Mike Kravetz 2020-04-01 279 * Serialize via i_mmap_rwsem and hugetlb_fault_mutex.
c0d0381ade7988 Mike Kravetz 2020-04-01 280 * i_mmap_rwsem ensures the dst_pte remains valid even
c0d0381ade7988 Mike Kravetz 2020-04-01 281 * in the case of shared pmds. fault mutex prevents
c0d0381ade7988 Mike Kravetz 2020-04-01 282 * races with other faulting threads.
60d4d2d2b40e44 Mike Kravetz 2017-02-22 283 */
ddeaab32a89f04 Mike Kravetz 2019-01-08 284 mapping = dst_vma->vm_file->f_mapping;
c0d0381ade7988 Mike Kravetz 2020-04-01 285 i_mmap_lock_read(mapping);
c0d0381ade7988 Mike Kravetz 2020-04-01 286 idx = linear_page_index(dst_vma, dst_addr);
188b04a7d93860 Wei Yang 2019-11-30 287 hash = hugetlb_fault_mutex_hash(mapping, idx);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 288 mutex_lock(&hugetlb_fault_mutex_table[hash]);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 289
60d4d2d2b40e44 Mike Kravetz 2017-02-22 290 err = -ENOMEM;
aec44e0f0213e3 Peter Xu 2021-05-04 291 dst_pte = huge_pte_alloc(dst_mm, dst_vma, dst_addr, vma_hpagesize);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 292 if (!dst_pte) {
60d4d2d2b40e44 Mike Kravetz 2017-02-22 293 mutex_unlock(&hugetlb_fault_mutex_table[hash]);
c0d0381ade7988 Mike Kravetz 2020-04-01 294 i_mmap_unlock_read(mapping);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 295 goto out_unlock;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 296 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 297
f619147104c8ea Axel Rasmussen 2021-05-04 298 if (mode != MCOPY_ATOMIC_CONTINUE &&
f619147104c8ea Axel Rasmussen 2021-05-04 299 !huge_pte_none(huge_ptep_get(dst_pte))) {
60d4d2d2b40e44 Mike Kravetz 2017-02-22 300 err = -EEXIST;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 301 mutex_unlock(&hugetlb_fault_mutex_table[hash]);
c0d0381ade7988 Mike Kravetz 2020-04-01 302 i_mmap_unlock_read(mapping);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 303 goto out_unlock;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 304 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 305
60d4d2d2b40e44 Mike Kravetz 2017-02-22 306 err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
f619147104c8ea Axel Rasmussen 2021-05-04 307 dst_addr, src_addr, mode, &page);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 308
60d4d2d2b40e44 Mike Kravetz 2017-02-22 309 mutex_unlock(&hugetlb_fault_mutex_table[hash]);
c0d0381ade7988 Mike Kravetz 2020-04-01 310 i_mmap_unlock_read(mapping);
1c9e8def43a345 Mike Kravetz 2017-02-22 311 vm_alloc_shared = vm_shared;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 312
60d4d2d2b40e44 Mike Kravetz 2017-02-22 313 cond_resched();
60d4d2d2b40e44 Mike Kravetz 2017-02-22 314
9e368259ad9883 Andrea Arcangeli 2018-11-30 315 if (unlikely(err == -ENOENT)) {
d8ed45c5dcd455 Michel Lespinasse 2020-06-08 316 mmap_read_unlock(dst_mm);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 317 BUG_ON(!page);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 318
60d4d2d2b40e44 Mike Kravetz 2017-02-22 319 err = copy_huge_page_from_user(page,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 320 (const void __user *)src_addr,
4fb07ee6510280 Wei Yang 2019-11-30 321 vma_hpagesize / PAGE_SIZE,
4fb07ee6510280 Wei Yang 2019-11-30 322 true);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 323 if (unlikely(err)) {
60d4d2d2b40e44 Mike Kravetz 2017-02-22 324 err = -EFAULT;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 325 goto out;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 326 }
d8ed45c5dcd455 Michel Lespinasse 2020-06-08 327 mmap_read_lock(dst_mm);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 328
60d4d2d2b40e44 Mike Kravetz 2017-02-22 329 dst_vma = NULL;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 330 goto retry;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 331 } else
60d4d2d2b40e44 Mike Kravetz 2017-02-22 332 BUG_ON(page);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 333
60d4d2d2b40e44 Mike Kravetz 2017-02-22 334 if (!err) {
60d4d2d2b40e44 Mike Kravetz 2017-02-22 335 dst_addr += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 336 src_addr += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 337 copied += vma_hpagesize;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 338
60d4d2d2b40e44 Mike Kravetz 2017-02-22 339 if (fatal_signal_pending(current))
60d4d2d2b40e44 Mike Kravetz 2017-02-22 340 err = -EINTR;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 341 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 342 if (err)
60d4d2d2b40e44 Mike Kravetz 2017-02-22 343 break;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 344 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 345
60d4d2d2b40e44 Mike Kravetz 2017-02-22 346 out_unlock:
d8ed45c5dcd455 Michel Lespinasse 2020-06-08 347 mmap_read_unlock(dst_mm);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 348 out:
1786d001262006 Mina Almasry 2021-06-01 349 if (page)
60d4d2d2b40e44 Mike Kravetz 2017-02-22 350 put_page(page);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 351 BUG_ON(copied < 0);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 352 BUG_ON(err > 0);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 353 BUG_ON(!copied && !err);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 354 return copied ? copied : err;
60d4d2d2b40e44 Mike Kravetz 2017-02-22 355 }
60d4d2d2b40e44 Mike Kravetz 2017-02-22 356 #else /* !CONFIG_HUGETLB_PAGE */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 357 /* fail at build time if gcc attempts to use this */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 358 extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 359 struct vm_area_struct *dst_vma,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 360 unsigned long dst_start,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 361 unsigned long src_start,
60d4d2d2b40e44 Mike Kravetz 2017-02-22 362 unsigned long len,
f619147104c8ea Axel Rasmussen 2021-05-04 363 enum mcopy_atomic_mode mode);
60d4d2d2b40e44 Mike Kravetz 2017-02-22 364 #endif /* CONFIG_HUGETLB_PAGE */
60d4d2d2b40e44 Mike Kravetz 2017-02-22 365
:::::: The code at line 212 was first introduced by commit
:::::: 1c9e8def43a3452e7af658b340f5f4f4ecde5c38 userfaultfd: hugetlbfs: add UFFDIO_COPY support for shared mappings
:::::: TO: Mike Kravetz <mike.kravetz@oracle.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [linux-next:master 5756/5946] mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used
From: Souptick Joarder @ 2021-06-02 8:18 UTC (permalink / raw)
To: kernel test robot
Cc: Mina Almasry, kbuild-all, Linux Memory Management List, Andrew Morton
On Tue, Jun 1, 2021 at 5:40 PM kernel test robot <lkp@intel.com> wrote:
> All warnings (new ones prefixed by >>):
>
> mm/userfaultfd.c: In function '__mcopy_atomic_hugetlb':
> >> mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used [-Wunused-but-set-variable]
> 212 | int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
> | ^~~~~~~~~~~~~~~
>
This looks like a false warning: vm_alloc_shared is set here, within the same function.
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
i_mmap_unlock_read(mapping);
vm_alloc_shared = vm_shared;
* Re: [kbuild-all] Re: [linux-next:master 5756/5946] mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used
From: Rong Chen @ 2021-06-03 8:44 UTC (permalink / raw)
To: Souptick Joarder, kernel test robot
Cc: Mina Almasry, kbuild-all, Linux Memory Management List, Andrew Morton
On 6/2/21 4:18 PM, Souptick Joarder wrote:
> On Tue, Jun 1, 2021 at 5:40 PM kernel test robot <lkp@intel.com> wrote:
>> mm/userfaultfd.c: In function '__mcopy_atomic_hugetlb':
>>>> mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used [-Wunused-but-set-variable]
>> 212 | int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
>> | ^~~~~~~~~~~~~~~
>>
> Looks like a false warning. vm_alloc_shared is set here within the
> same function.
Hi Souptick,
The warning is correct: as it says, vm_alloc_shared is only ever set, never read, anywhere in the function. Assigning to a variable does not count as a use.
Best Regards,
Rong Chen
>> 27d02568f529e9 Mike Rapoport 2017-02-24 259 err = -EINVAL;
>> 27d02568f529e9 Mike Rapoport 2017-02-24 260 if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
>> 27d02568f529e9 Mike Rapoport 2017-02-24 261 goto out_unlock;
>> 27d02568f529e9 Mike Rapoport 2017-02-24 262
>> 1c9e8def43a345 Mike Kravetz 2017-02-22 263 vm_shared = dst_vma->vm_flags & VM_SHARED;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 264 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 265
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 266 /*
>> 1c9e8def43a345 Mike Kravetz 2017-02-22 267 * If not shared, ensure the dst_vma has an anon_vma.
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 268 */
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 269 err = -ENOMEM;
>> 1c9e8def43a345 Mike Kravetz 2017-02-22 270 if (!vm_shared) {
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 271 if (unlikely(anon_vma_prepare(dst_vma)))
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 272 goto out_unlock;
>> 1c9e8def43a345 Mike Kravetz 2017-02-22 273 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 274
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 275 while (src_addr < src_start + len) {
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 276 BUG_ON(dst_addr >= dst_start + len);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 277
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 278 /*
>> c0d0381ade7988 Mike Kravetz 2020-04-01 279 * Serialize via i_mmap_rwsem and hugetlb_fault_mutex.
>> c0d0381ade7988 Mike Kravetz 2020-04-01 280 * i_mmap_rwsem ensures the dst_pte remains valid even
>> c0d0381ade7988 Mike Kravetz 2020-04-01 281 * in the case of shared pmds. fault mutex prevents
>> c0d0381ade7988 Mike Kravetz 2020-04-01 282 * races with other faulting threads.
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 283 */
>> ddeaab32a89f04 Mike Kravetz 2019-01-08 284 mapping = dst_vma->vm_file->f_mapping;
>> c0d0381ade7988 Mike Kravetz 2020-04-01 285 i_mmap_lock_read(mapping);
>> c0d0381ade7988 Mike Kravetz 2020-04-01 286 idx = linear_page_index(dst_vma, dst_addr);
>> 188b04a7d93860 Wei Yang 2019-11-30 287 hash = hugetlb_fault_mutex_hash(mapping, idx);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 288 mutex_lock(&hugetlb_fault_mutex_table[hash]);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 289
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 290 err = -ENOMEM;
>> aec44e0f0213e3 Peter Xu 2021-05-04 291 dst_pte = huge_pte_alloc(dst_mm, dst_vma, dst_addr, vma_hpagesize);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 292 if (!dst_pte) {
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 293 mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> c0d0381ade7988 Mike Kravetz 2020-04-01 294 i_mmap_unlock_read(mapping);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 295 goto out_unlock;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 296 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 297
>> f619147104c8ea Axel Rasmussen 2021-05-04 298 if (mode != MCOPY_ATOMIC_CONTINUE &&
>> f619147104c8ea Axel Rasmussen 2021-05-04 299 !huge_pte_none(huge_ptep_get(dst_pte))) {
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 300 err = -EEXIST;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 301 mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> c0d0381ade7988 Mike Kravetz 2020-04-01 302 i_mmap_unlock_read(mapping);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 303 goto out_unlock;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 304 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 305
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 306 err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
>> f619147104c8ea Axel Rasmussen 2021-05-04 307 dst_addr, src_addr, mode, &page);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 308
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 309 mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> c0d0381ade7988 Mike Kravetz 2020-04-01 310 i_mmap_unlock_read(mapping);
>> 1c9e8def43a345 Mike Kravetz 2017-02-22 311 vm_alloc_shared = vm_shared;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 312
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 313 cond_resched();
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 314
>> 9e368259ad9883 Andrea Arcangeli 2018-11-30 315 if (unlikely(err == -ENOENT)) {
>> d8ed45c5dcd455 Michel Lespinasse 2020-06-08 316 mmap_read_unlock(dst_mm);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 317 BUG_ON(!page);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 318
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 319 err = copy_huge_page_from_user(page,
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 320 (const void __user *)src_addr,
>> 4fb07ee6510280 Wei Yang 2019-11-30 321 vma_hpagesize / PAGE_SIZE,
>> 4fb07ee6510280 Wei Yang 2019-11-30 322 true);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 323 if (unlikely(err)) {
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 324 err = -EFAULT;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 325 goto out;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 326 }
>> d8ed45c5dcd455 Michel Lespinasse 2020-06-08 327 mmap_read_lock(dst_mm);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 328
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 329 dst_vma = NULL;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 330 goto retry;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 331 } else
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 332 BUG_ON(page);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 333
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 334 if (!err) {
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 335 dst_addr += vma_hpagesize;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 336 src_addr += vma_hpagesize;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 337 copied += vma_hpagesize;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 338
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 339 if (fatal_signal_pending(current))
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 340 err = -EINTR;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 341 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 342 if (err)
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 343 break;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 344 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 345
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 346 out_unlock:
>> d8ed45c5dcd455 Michel Lespinasse 2020-06-08 347 mmap_read_unlock(dst_mm);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 348 out:
>> 1786d001262006 Mina Almasry 2021-06-01 349 if (page)
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 350 put_page(page);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 351 BUG_ON(copied < 0);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 352 BUG_ON(err > 0);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 353 BUG_ON(!copied && !err);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 354 return copied ? copied : err;
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 355 }
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 356 #else /* !CONFIG_HUGETLB_PAGE */
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 357 /* fail at build time if gcc attempts to use this */
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 358 extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 359 struct vm_area_struct *dst_vma,
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 360 unsigned long dst_start,
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 361 unsigned long src_start,
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 362 unsigned long len,
>> f619147104c8ea Axel Rasmussen 2021-05-04 363 enum mcopy_atomic_mode mode);
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 364 #endif /* CONFIG_HUGETLB_PAGE */
>> 60d4d2d2b40e44 Mike Kravetz 2017-02-22 365
>>
>> :::::: The code at line 212 was first introduced by commit
>> :::::: 1c9e8def43a3452e7af658b340f5f4f4ecde5c38 userfaultfd: hugetlbfs: add UFFDIO_COPY support for shared mappings
>>
>> :::::: TO: Mike Kravetz <mike.kravetz@oracle.com>
>> :::::: CC: Linus Torvalds <torvalds@linux-foundation.org>
>>
>> ---
>> 0-DAY CI Kernel Test Service, Intel Corporation
>> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
end of thread, other threads:[~2021-06-03 8:44 UTC | newest]
Thread overview: 3+ messages
2021-06-01 12:10 [linux-next:master 5756/5946] mm/userfaultfd.c:212:6: warning: variable 'vm_alloc_shared' set but not used kernel test robot
2021-06-02 8:18 ` Souptick Joarder
2021-06-03 8:44 ` [kbuild-all] " Rong Chen