> > In lock_region, simplify the calculation of the region_width parameter.
> > This field is the size, but encoded as ceil(log2(size)) - 1.
> > ceil(log2(size)) may be computed directly as fls(size - 1). However, we
> > want to use the 64-bit versions as the amount to lock can exceed
> > 32-bits.
> >
> > This avoids undefined behaviour when locking all memory (size ~0),
> > caught by UBSAN.
>
> It might have been useful to mention what it is that UBSAN specifically
> picked up (it took me a while to spot) - but anyway I think there's a
> bigger issue with it being completely wrong when size == ~0 (see below).

Indeed. I've updated the commit message in v2 to explain what goes wrong
(your analysis was spot on, but a mailing list message is more ephemeral
than a commit message). I'll send out v2 tomorrow assuming nobody objects
to v1 in the meantime. Thanks for the review.

> There is potentially a third bug which kbase only recently attempted to
> fix. The lock address is effectively rounded down by the hardware (the
> bottom bits are ignored). So if you have mask = (1 << region_width) - 1 and
> (iova & mask) != ((iova + size) & mask) then you are potentially failing
> to lock the end of the intended region. kbase has added some code to
> handle this:
>
> >     /* Round up if some memory pages spill into the next region. */
> >     region_frame_number_start = pfn >> (lockaddr_size_log2 - PAGE_SHIFT);
> >     region_frame_number_end =
> >             (pfn + num_pages - 1) >> (lockaddr_size_log2 - PAGE_SHIFT);
> >
> >     if (region_frame_number_start < region_frame_number_end)
> >             lockaddr_size_log2 += 1;
>
> I guess we should too?

Oh, I missed this one. Guess we have 4 bugs with this code instead of
just 3, yikes. How could such a short function be so deeply and horribly
broken? 😃

Should I add a fourth patch to the series to fix this?
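For the record, here's roughly what I'm imagining for the combined
calculation (untested sketch against the existing iova/size locals; I've
transliterated the kbase round-up into byte addresses and made it a loop
rather than a single increment, and it assumes size is non-zero and
already clamped to the hardware minimum):

    u8 region_width;
    u64 region = iova & PAGE_MASK;
    u64 size_log2;

    /* The region size is encoded as ceil(log2(size)) - 1, which
     * fls64(size - 1) gives us directly. The 64-bit version matters
     * because size can exceed 32 bits (~0 when locking everything).
     */
    size_log2 = fls64(size - 1);

    /* The hardware rounds the lock address down to the region size, so
     * grow the region until the first and last byte of the request fall
     * inside the same naturally aligned block (the round-up from the
     * kbase snippet above, in byte-address terms).
     */
    while (size_log2 < 64 &&
           (iova >> size_log2) != ((iova + size - 1) >> size_log2))
        size_log2 += 1;

    region_width = size_log2 - 1;
    region |= region_width;

    /* ...followed by the existing AS_LOCKADDR / LOCK command writes */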
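As a quick sanity check of the encoding: size = 4 KiB gives
fls64(0xfff) - 1 = 11, size = 4 KiB + 1 gives 12 (rounded up to the next
power of two, as intended), and size = ~0 gives 63, i.e. the whole 2^64
address space, with no 32-bit fls overflow or undefined shift along the
way.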