On Thu, 17 Dec 2020 at 02:40, kernel test robot wrote:
>
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> head:   9317f948b0b188b8d2fded75957e6d42c460df1b
> commit: e21d96503adda2ccb571d577ad32929383c710ea [12593/13311] x86, kfence: enable KFENCE for x86
> config: x86_64-randconfig-s022-20201216 (attached as .config)
> compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
> reproduce:
>         # apt-get install sparse
>         # sparse version: v0.6.3-184-g1b896707-dirty
>         # https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=e21d96503adda2ccb571d577ad32929383c710ea
>         git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
>         git fetch --no-tags linux-next master
>         git checkout e21d96503adda2ccb571d577ad32929383c710ea
>         # save the attached .config to linux build tree
>         make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot
>
>
> "sparse warnings: (new ones prefixed by >>)"
> >> mm/kfence/core.c:250:13: sparse: sparse: context imbalance in 'kfence_guarded_alloc' - wrong count at exit
> >> mm/kfence/core.c:825:9: sparse: sparse: context imbalance in 'kfence_handle_page_fault' - different lock contexts for basic block

This is a false positive: sparse can't seem to follow the conditional locking done here. This code has been tested extensively with lockdep.
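For anyone curious, the shape sparse trips over is most likely the conditional acquire at line 267 below: raw_spin_trylock_irqsave() takes the lock on only one branch, so the basic blocks that follow have different lock contexts even though every path is balanced. A minimal hypothetical sketch (the demo_* names are made up, this is not the kfence code itself):

  #include <linux/errno.h>
  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(demo_lock);
  static int demo_state;

  static int demo_try_read(void)
  {
  	unsigned long flags;
  	int val;

  	/* Conditional acquire: the lock is not held on the failure path. */
  	if (!raw_spin_trylock_irqsave(&demo_lock, flags))
  		return -EAGAIN;

  	/* The lock is held here and released on every remaining path. */
  	val = demo_state;
  	raw_spin_unlock_irqrestore(&demo_lock, flags);
  	return val;
  }

lockdep tracks actual lock state at runtime rather than counting per basic block, which is why it stays quiet on the same code.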
> vim +/kfence_guarded_alloc +250 mm/kfence/core.c
>
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  249  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10 @250  static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp)
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  251  {
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  252  	struct kfence_metadata *meta = NULL;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  253  	unsigned long flags;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  254  	struct page *page;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  255  	void *addr;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  256  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  257  	/* Try to obtain a free object. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  258  	raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  259  	if (!list_empty(&kfence_freelist)) {
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  260  		meta = list_entry(kfence_freelist.next, struct kfence_metadata, list);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  261  		list_del_init(&meta->list);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  262  	}
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  263  	raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  264  	if (!meta)
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  265  		return NULL;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  266  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  267  	if (unlikely(!raw_spin_trylock_irqsave(&meta->lock, flags))) {
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  268  		/*
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  269  		 * This is extremely unlikely -- we are reporting on a
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  270  		 * use-after-free, which locked meta->lock, and the reporting
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  271  		 * code via printk calls kmalloc() which ends up in
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  272  		 * kfence_alloc() and tries to grab the same object that we're
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  273  		 * reporting on. While it has never been observed, lockdep does
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  274  		 * report that there is a possibility of deadlock. Fix it by
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  275  		 * using trylock and bailing out gracefully.
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  276  		 */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  277  		raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  278  		/* Put the object back on the freelist. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  279  		list_add_tail(&meta->list, &kfence_freelist);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  280  		raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  281  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  282  		return NULL;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  283  	}
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  284  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  285  	meta->addr = metadata_to_pageaddr(meta);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  286  	/* Unprotect if we're reusing this page. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  287  	if (meta->state == KFENCE_OBJECT_FREED)
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  288  		kfence_unprotect(meta->addr);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  289  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  290  	/*
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  291  	 * Note: for allocations made before RNG initialization, will always
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  292  	 * return zero. We still benefit from enabling KFENCE as early as
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  293  	 * possible, even when the RNG is not yet available, as this will allow
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  294  	 * KFENCE to detect bugs due to earlier allocations. The only downside
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  295  	 * is that the out-of-bounds accesses detected are deterministic for
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  296  	 * such allocations.
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  297  	 */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  298  	if (prandom_u32_max(2)) {
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  299  		/* Allocate on the "right" side, re-calculate address. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  300  		meta->addr += PAGE_SIZE - size;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  301  		meta->addr = ALIGN_DOWN(meta->addr, cache->align);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  302  	}
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  303  
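A worked example of the "right" side re-calculation above, with made-up numbers: for PAGE_SIZE == 4096, size == 100, and cache->align == 32, meta->addr moves from base to base + 3996, and ALIGN_DOWN() pulls it back to base + 3968, so the object ends 28 bytes short of the page boundary. Out-of-bounds writes into that small tail gap are still caught, by the canary bytes set up below rather than by the guard page.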
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  304  	addr = (void *)meta->addr;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  305  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  306  	/* Update remaining metadata. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  307  	metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  308  	/* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  309  	WRITE_ONCE(meta->cache, cache);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  310  	meta->size = size;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  311  	for_each_canary(meta, set_canary_byte);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  312  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  313  	/* Set required struct page fields. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  314  	page = virt_to_page(meta->addr);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  315  	page->slab_cache = cache;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  316  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  317  	raw_spin_unlock_irqrestore(&meta->lock, flags);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  318  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  319  	/* Memory initialization. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  320  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  321  	/*
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  322  	 * We check slab_want_init_on_alloc() ourselves, rather than letting
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  323  	 * SL*B do the initialization, as otherwise we might overwrite KFENCE's
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  324  	 * redzone.
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  325  	 */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  326  	if (unlikely(slab_want_init_on_alloc(gfp, cache)))
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  327  		memzero_explicit(addr, size);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  328  	if (cache->ctor)
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  329  		cache->ctor(addr);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  330  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  331  	if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  332  		kfence_protect(meta->addr); /* Random "faults" by protecting the object. */
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  333  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  334  	atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  335  	atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCS]);
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  336  
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  337  	return addr;
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  338  }
> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  339  
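Regarding for_each_canary(meta, set_canary_byte) at line 311 above, a hypothetical sketch of the canary scheme (the demo_* names and the exact byte pattern are made up here; the real helpers live in mm/kfence/): bytes of the data page outside [meta->addr, meta->addr + size) are filled with a per-address pattern and verified again on free, so out-of-bounds writes that land inside the page, where the guard pages can't trap them, are still detected:

  #include <linux/mm.h>
  #include <linux/types.h>

  /* Vary the pattern with the low address bits to catch shifted writes. */
  static u8 demo_canary_pattern(unsigned long addr)
  {
  	return 0xaa ^ (addr & 0x7);
  }

  /* Fill everything on the page outside [start, start + size). */
  static void demo_set_canaries(unsigned long pageaddr, unsigned long start, size_t size)
  {
  	unsigned long addr;

  	for (addr = pageaddr; addr < start; addr++)
  		*(u8 *)addr = demo_canary_pattern(addr);
  	for (addr = start + size; addr < pageaddr + PAGE_SIZE; addr++)
  		*(u8 *)addr = demo_canary_pattern(addr);
  }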
> :::::: The code at line 250 was first introduced by commit
> :::::: 3b295ea3a66b734a0cd23ae66bae0747a078725a mm: add Kernel Electric-Fence infrastructure
>
> :::::: TO: Alexander Potapenko
> :::::: CC: Stephen Rothwell
>
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
> _______________________________________________
> kbuild mailing list -- kbuild@lists.01.org
> To unsubscribe send an email to kbuild-leave@lists.01.org