From: Tejun Heo <tj@kernel.org>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Sasha Levin <sasha.levin@oracle.com>, ast@kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Christoph Lameter <cl@linux.com>,
	Linux-MM <linux-mm@kvack.org>, marco.gra@gmail.com
Subject: Re: bpf: use-after-free in array_map_alloc
Date: Tue, 24 May 2016 11:30:29 -0400
Message-ID: <20160524153029.GA3354@mtj.duckdns.org>
In-Reply-To: <57441396.2050607@suse.cz>

Hello,

On Tue, May 24, 2016 at 10:40:54AM +0200, Vlastimil Babka wrote:
> [+CC Marco who reported the CVE, forgot that earlier]
>
> On 05/23/2016 11:35 PM, Tejun Heo wrote:
> > Hello,
> >
> > Can you please test whether this patch resolves the issue? While
> > adding support for atomic allocations, I reduced the alloc_mutex
> > covered region too much.
> >
> > Thanks.
>
> Ugh, this makes the code even more head-spinning than it was.

Locking-wise, it isn't complicated. It used to be a single mutex
protecting everything. Atomic alloc support required putting the core
allocation parts under a spinlock. It is messy because the two paths
are mixed in the same function. If we break out the core part into a
separate function and let the sleepable path call into it, it should
look okay, but that's for another patch.

Also, I think protecting a chunk's lifetime with alloc_mutex makes
things a bit nasty. Maybe we should use a per-chunk "extending"
completion and let pcpu_alloc_mutex just protect populating chunks.

> > @@ -435,6 +435,8 @@ static int pcpu_extend_area_map(struct pcpu_chunk *chunk, int new_alloc)
> >  	size_t old_size = 0, new_size = new_alloc * sizeof(new[0]);
> >  	unsigned long flags;
> >
> > +	lockdep_assert_held(&pcpu_alloc_mutex);
>
> I don't see where the mutex gets locked when called via
> pcpu_map_extend_workfn? (except via the new cancel_work_sync() call
> below?)

Ah, right.
> Also what protects chunks with scheduled work items from being
> removed?

cancel_work_sync(), which now obviously should be called outside
alloc_mutex.

> > @@ -895,6 +897,9 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
> >  		return NULL;
> >  	}
> >
> > +	if (!is_atomic)
> > +		mutex_lock(&pcpu_alloc_mutex);
>
> BTW I noticed that
>
>   bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
>
> this is too pessimistic IMHO. Reclaim is possible even without
> __GFP_FS and __GFP_IO. Could you just use gfpflags_allow_blocking(gfp)
> here?

vmalloc hardcodes GFP_KERNEL, so getting more relaxed doesn't buy us
much.

Thanks.

-- 
tejun
Thread overview (31 messages):

2016-04-17 16:58 bpf: use-after-free in array_map_alloc — Sasha Levin
2016-04-17 17:29 ` Alexei Starovoitov
2016-04-17 22:45 ` Sasha Levin
2016-05-23 12:01 ` Vlastimil Babka
2016-05-23 12:07 ` Vlastimil Babka
2016-05-23 21:35 ` Tejun Heo
2016-05-23 22:13 ` Alexei Starovoitov
2016-05-24  8:40 ` Vlastimil Babka
2016-05-24 15:30 ` Tejun Heo [this message]
2016-05-24 19:04 ` Tejun Heo
2016-05-24 20:43 ` Alexei Starovoitov
2016-05-25 15:44 ` [PATCH percpu/for-4.7-fixes 1/2] percpu: fix synchronization between chunk->map_extend_work and chunk destruction — Tejun Heo
2016-05-26  9:19 `   Vlastimil Babka
2016-05-26 19:21 `   Tejun Heo
2016-05-26 20:48 `   Vlastimil Babka
2016-05-25 15:45 ` [PATCH percpu/for-4.7-fixes 2/2] percpu: fix synchronization between synchronous map extension and chunk destruction — Tejun Heo
2016-05-26  9:48 `   Vlastimil Babka