* [PATCH-block 0/3] blk-cgroup: Fix potential UAF & miscellaneous cleanup
@ 2022-12-08 22:01 Waiman Long
2022-12-08 22:01 ` [PATCH-block 1/3] bdi, blk-cgroup: Fix potential UAF of blkcg Waiman Long
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Waiman Long @ 2022-12-08 22:01 UTC (permalink / raw)
To: Jens Axboe, Tejun Heo, Josef Bacik, Zefan Li, Johannes Weiner,
Andrew Morton
Cc: cgroups, linux-block, linux-kernel, linux-mm, Michal Koutný,
Dennis Zhou (Facebook),
Waiman Long
It was found that blkcg_destroy_blkgs() may be called with all blkcg
references gone. This may potentially cause a use-after-free and so
should be fixed. The last two patches are miscellaneous cleanups of
commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()").
Waiman Long (3):
bdi, blk-cgroup: Fix potential UAF of blkcg
blk-cgroup: Don't flush a blkg if destroyed
blk-cgroup: Flush stats at blkgs destruction path
block/blk-cgroup.c | 26 ++++++++++++++++++++++++++
include/linux/cgroup.h | 1 +
kernel/cgroup/rstat.c | 20 ++++++++++++++++++++
mm/backing-dev.c | 8 ++++++--
4 files changed, 53 insertions(+), 2 deletions(-)
--
2.31.1
* [PATCH-block 1/3] bdi, blk-cgroup: Fix potential UAF of blkcg
From: Waiman Long @ 2022-12-08 22:01 UTC (permalink / raw)
To: Jens Axboe, Tejun Heo, Josef Bacik, Zefan Li, Johannes Weiner,
Andrew Morton
Cc: cgroups, linux-block, linux-kernel, linux-mm, Michal Koutný,
Dennis Zhou (Facebook),
Waiman Long, Yi Zhang
Commit 59b57717fff8 ("blkcg: delay blkg destruction until after
writeback has finished") delayed the call to blkcg_destroy_blkgs() to
cgwb_release_workfn(). However, that call now happens after a css_put()
of the blkcg, which may be the final put that frees the blkcg, since
the RCU read lock isn't held.
Another place where blkcg_destroy_blkgs() can be called indirectly via
blkcg_unpin_online() is the offline_css() function called from
css_killed_work_fn(). There, the potentially final css_put() call
is issued after offline_css().
By adding a css_tryget() into blkcg_destroy_blkgs() and warning on its
failure, the following stack trace was produced on a test system during
bootup.
[ 34.254240] RIP: 0010:blkcg_destroy_blkgs+0x16a/0x1a0
:
[ 34.339943] Call Trace:
[ 34.344510] blkcg_unpin_online+0x38/0x60
[ 34.348523] cgwb_release_workfn+0x6a/0x200
[ 34.352708] process_one_work+0x1e5/0x3b0
[ 34.360758] worker_thread+0x50/0x3a0
[ 34.368447] kthread+0xd9/0x100
[ 34.376386] ret_from_fork+0x22/0x30
This confirms that a potential UAF situation can really happen in
cgwb_release_workfn().
Fix that by delaying the css_put() until after the blkcg_unpin_online()
call. Also use css_tryget() in blkcg_destroy_blkgs() and issue a warning
if css_tryget() fails.
With this patch, the reproducing system no longer produces the warning.
All the runnable block/0* tests, including block/027, were run
successfully without failure.
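The ordering problem can be modeled outside the kernel with a plain
atomic refcount. The following is a hypothetical standalone sketch, not
kernel code: obj, obj_put(), obj_tryget() and destroy_blkgs() are made-up
names standing in for the css/blkcg primitives.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Standalone model of a css-like refcounted object. */
struct obj {
	atomic_int refcnt;	/* models the css reference count */
	bool freed;		/* set when the last reference drops */
};

static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->refcnt, 1) == 1)
		o->freed = true;	/* models freeing the blkcg */
}

/* Models css_tryget(): fails once every reference is gone. */
static bool obj_tryget(struct obj *o)
{
	int v = atomic_load(&o->refcnt);

	while (v > 0)
		if (atomic_compare_exchange_weak(&o->refcnt, &v, v + 1))
			return true;
	return false;
}

/* Models blkcg_destroy_blkgs() with the patch applied: refuse to
 * touch an object whose references are all gone (the kernel warns
 * with WARN_ON_ONCE() instead of returning a status). */
static bool destroy_blkgs(struct obj *o)
{
	if (!obj_tryget(o))
		return false;
	/* ... walk and destroy the blkgs here ... */
	obj_put(o);
	return true;
}
```

With the buggy ordering (final obj_put() before destroy_blkgs()) the
tryget fails, mirroring the warning above; with the fixed ordering
(destroy first, put last) it succeeds and the object is freed only
afterwards.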
Fixes: 59b57717fff8 ("blkcg: delay blkg destruction until after writeback has finished")
Suggested-by: Michal Koutný <mkoutny@suse.com>
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
block/blk-cgroup.c | 8 ++++++++
mm/backing-dev.c | 8 ++++++--
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 1bb939d3b793..21cc88349f21 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1084,6 +1084,13 @@ struct list_head *blkcg_get_cgwb_list(struct cgroup_subsys_state *css)
*/
static void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
+ /*
+ * blkcg_destroy_blkgs() shouldn't be called with all the blkcg
+ * references gone.
+ */
+ if (WARN_ON_ONCE(!css_tryget(&blkcg->css)))
+ return;
+
might_sleep();
spin_lock_irq(&blkcg->lock);
@@ -1110,6 +1117,7 @@ static void blkcg_destroy_blkgs(struct blkcg *blkcg)
}
spin_unlock_irq(&blkcg->lock);
+ css_put(&blkcg->css);
}
/**
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index c30419a5e119..36f75b072325 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -390,11 +390,15 @@ static void cgwb_release_workfn(struct work_struct *work)
wb_shutdown(wb);
css_put(wb->memcg_css);
- css_put(wb->blkcg_css);
mutex_unlock(&wb->bdi->cgwb_release_mutex);
- /* triggers blkg destruction if no online users left */
+ /*
+ * Triggers blkg destruction if no online users left
+ * The final blkcg css_put() has to be done after blkcg_unpin_online()
+ * to avoid use-after-free.
+ */
blkcg_unpin_online(wb->blkcg_css);
+ css_put(wb->blkcg_css);
fprop_local_destroy_percpu(&wb->memcg_completions);
--
2.31.1
* [PATCH-block 2/3] blk-cgroup: Don't flush a blkg if destroyed
From: Waiman Long @ 2022-12-08 22:01 UTC (permalink / raw)
To: Jens Axboe, Tejun Heo, Josef Bacik, Zefan Li, Johannes Weiner,
Andrew Morton
Cc: cgroups, linux-block, linux-kernel, linux-mm, Michal Koutný,
Dennis Zhou (Facebook),
Waiman Long
Before commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()"),
a blkg's stats were only flushed if the blkg was online. In addition,
the stat flushing of blkgs in blkcg_rstat_flush() includes propagating
the rstat data to the parent. However, if a blkg has been destroyed
(offlined), the validity of its parent may be questionable. For safety,
revert to the old behavior by ignoring offline blkgs.
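The intent can be illustrated with a small standalone model
(hypothetical names, not the kernel code): the flush of an offline blkg
is skipped, but the reference that was taken when the entry was queued
must still be dropped, or it would leak.

```c
/* Standalone model of the flush loop in blkcg_rstat_flush();
 * blkg_model and flush_one() are made-up names for illustration. */
struct blkg_model {
	int online;		/* 0 once the blkg has been destroyed */
	int refcnt;		/* reference taken when it was queued */
	long long pending;	/* per-cpu delta waiting to be flushed */
	long long parent;	/* stats propagated to the parent blkg */
};

static void flush_one(struct blkg_model *g)
{
	if (!g->online) {	/* destroyed: parent may be invalid */
		g->refcnt--;	/* still drop the queued reference */
		return;
	}
	g->parent += g->pending;	/* propagate to the parent */
	g->pending = 0;
	g->refcnt--;
}
```

This mirrors the hunk below: the offline case takes the early-exit path
but still does the percpu_ref_put().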
Signed-off-by: Waiman Long <longman@redhat.com>
---
block/blk-cgroup.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 21cc88349f21..c466aef0d467 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -885,6 +885,12 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
WRITE_ONCE(bisc->lqueued, false);
+ /* Don't flush its stats if blkg is offline */
+ if (unlikely(!blkg->online)) {
+ percpu_ref_put(&blkg->refcnt);
+ continue;
+ }
+
/* fetch the current per-cpu values */
do {
seq = u64_stats_fetch_begin(&bisc->sync);
--
2.31.1
* [PATCH-block 3/3] blk-cgroup: Flush stats at blkgs destruction path
From: Waiman Long @ 2022-12-08 22:01 UTC (permalink / raw)
To: Jens Axboe, Tejun Heo, Josef Bacik, Zefan Li, Johannes Weiner,
Andrew Morton
Cc: cgroups, linux-block, linux-kernel, linux-mm, Michal Koutný,
Dennis Zhou (Facebook),
Waiman Long
As noted by Michal, the blkg_iostat_set's in the lockless list
hold references to the blkgs to protect against their removal. Those
blkgs in turn hold references to the blkcg. When a cgroup is being
destroyed, cgroup_rstat_flush() is only called from
css_release_work_fn(), which runs when the blkcg reference count reaches
0. This circular dependency prevents the blkcg from being freed until
some other event causes cgroup_rstat_flush() to be called to flush out
the pending blkcg stats.
To prevent this delayed blkcg removal, add a new
cgroup_rstat_css_cpu_flush() function to flush the stats for a given css
and cpu, and call it in the blkgs destruction path,
blkcg_destroy_blkgs(), whenever there are still pending stats to be
flushed. This ensures that the blkcg reference count can reach 0 as soon
as possible.
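The circular dependency can be sketched with a standalone model
(hypothetical names, not kernel code; the pending[] array stands in for
the per-cpu lockless lists): each queued entry pins the blkcg with a
reference, and only a flush drops those pins, so flushing in the
destruction path lets the refcount fall to its base value.

```c
#define NR_CPUS 4

/* Standalone model of a blkcg pinned by queued per-cpu stat entries. */
struct blkcg_model {
	int refcnt;		/* references held on the blkcg */
	int pending[NR_CPUS];	/* queued entries per cpu */
};

/* Queue a stat update on @cpu: takes a reference, like an llist_add()
 * paired with a reference grab in the I/O hot path. */
static void queue_stat(struct blkcg_model *b, int cpu)
{
	b->pending[cpu]++;
	b->refcnt++;
}

/* Flush one cpu's list and drop the per-entry references, like
 * cgroup_rstat_css_cpu_flush() invoking the subsystem flush. */
static void flush_cpu(struct blkcg_model *b, int cpu)
{
	b->refcnt -= b->pending[cpu];
	b->pending[cpu] = 0;
}

/* Destruction path with this patch applied: flush every non-empty
 * per-cpu list so no queued entry keeps pinning the blkcg. */
static void destroy_blkgs_flush(struct blkcg_model *b)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (b->pending[cpu])
			flush_cpu(b, cpu);
}
```

Without the destruction-path flush, the model's refcnt stays above its
base value until something else flushes, which is exactly the delayed
removal described above.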
Signed-off-by: Waiman Long <longman@redhat.com>
---
block/blk-cgroup.c | 12 ++++++++++++
include/linux/cgroup.h | 1 +
kernel/cgroup/rstat.c | 20 ++++++++++++++++++++
3 files changed, 33 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index c466aef0d467..534f3baeb84a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1090,6 +1090,8 @@ struct list_head *blkcg_get_cgwb_list(struct cgroup_subsys_state *css)
*/
static void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
+ int cpu;
+
/*
* blkcg_destroy_blkgs() shouldn't be called with all the blkcg
* references gone.
@@ -1099,6 +1101,16 @@ static void blkcg_destroy_blkgs(struct blkcg *blkcg)
might_sleep();
+ /*
+ * Flush all the non-empty percpu lockless lists.
+ */
+ for_each_possible_cpu(cpu) {
+ struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
+
+ if (!llist_empty(lhead))
+ cgroup_rstat_css_cpu_flush(&blkcg->css, cpu);
+ }
+
spin_lock_irq(&blkcg->lock);
while (!hlist_empty(&blkcg->blkg_list)) {
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 528bd44b59e2..6c4e66b3fa84 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -766,6 +766,7 @@ void cgroup_rstat_flush(struct cgroup *cgrp);
void cgroup_rstat_flush_irqsafe(struct cgroup *cgrp);
void cgroup_rstat_flush_hold(struct cgroup *cgrp);
void cgroup_rstat_flush_release(void);
+void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu);
/*
* Basic resource stats.
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 793ecff29038..910e633869b0 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -281,6 +281,26 @@ void cgroup_rstat_flush_release(void)
spin_unlock_irq(&cgroup_rstat_lock);
}
+/**
+ * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
+ * @css: target css to be flushed
+ * @cpu: the cpu that holds the stats to be flushed
+ *
+ * A lightweight rstat flush operation for a given css and cpu.
+ * Only the cpu_lock is being held for mutual exclusion, the cgroup_rstat_lock
+ * isn't used.
+ */
+void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
+{
+ raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
+
+ raw_spin_lock_irq(cpu_lock);
+ rcu_read_lock();
+ css->ss->css_rstat_flush(css, cpu);
+ rcu_read_unlock();
+ raw_spin_unlock_irq(cpu_lock);
+}
+
int cgroup_rstat_init(struct cgroup *cgrp)
{
int cpu;
--
2.31.1
* Re: [PATCH-block 3/3] blk-cgroup: Flush stats at blkgs destruction path
From: Jens Axboe @ 2022-12-08 23:00 UTC (permalink / raw)
To: Waiman Long, Tejun Heo, Josef Bacik, Zefan Li, Johannes Weiner,
Andrew Morton
Cc: cgroups, linux-block, linux-kernel, linux-mm, Michal Koutný,
Dennis Zhou (Facebook)
On 12/8/22 3:01 PM, Waiman Long wrote:
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index 793ecff29038..910e633869b0 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -281,6 +281,26 @@ void cgroup_rstat_flush_release(void)
> spin_unlock_irq(&cgroup_rstat_lock);
> }
>
> +/**
> + * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
> + * @css: target css to be flushed
> + * @cpu: the cpu that holds the stats to be flushed
> + *
> + * A lightweight rstat flush operation for a given css and cpu.
> + * Only the cpu_lock is being held for mutual exclusion, the cgroup_rstat_lock
> + * isn't used.
> + */
> +void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
> +{
> + raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
> +
> + raw_spin_lock_irq(cpu_lock);
> + rcu_read_lock();
> + css->ss->css_rstat_flush(css, cpu);
> + rcu_read_unlock();
> + raw_spin_unlock_irq(cpu_lock);
> +}
> +
> int cgroup_rstat_init(struct cgroup *cgrp)
> {
> int cpu;
As I mentioned last time, raw_spin_lock_irq() will be equivalent to an
RCU protected section anyway, so you don't need to do both. Just add a
comment on why rcu_read_lock()/rcu_read_unlock() isn't needed inside the
raw irq safe lock.
--
Jens Axboe
* Re: [PATCH-block 3/3] blk-cgroup: Flush stats at blkgs destruction path
From: Waiman Long @ 2022-12-09 15:58 UTC (permalink / raw)
To: Jens Axboe, Tejun Heo, Josef Bacik, Zefan Li, Johannes Weiner,
Andrew Morton
Cc: cgroups, linux-block, linux-kernel, linux-mm, Michal Koutný,
Dennis Zhou (Facebook)
On 12/8/22 18:00, Jens Axboe wrote:
> On 12/8/22 3:01 PM, Waiman Long wrote:
>> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
>> index 793ecff29038..910e633869b0 100644
>> --- a/kernel/cgroup/rstat.c
>> +++ b/kernel/cgroup/rstat.c
>> @@ -281,6 +281,26 @@ void cgroup_rstat_flush_release(void)
>> spin_unlock_irq(&cgroup_rstat_lock);
>> }
>>
>> +/**
>> + * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
>> + * @css: target css to be flushed
>> + * @cpu: the cpu that holds the stats to be flushed
>> + *
>> + * A lightweight rstat flush operation for a given css and cpu.
>> + * Only the cpu_lock is being held for mutual exclusion, the cgroup_rstat_lock
>> + * isn't used.
>> + */
>> +void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
>> +{
>> + raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
>> +
>> + raw_spin_lock_irq(cpu_lock);
>> + rcu_read_lock();
>> + css->ss->css_rstat_flush(css, cpu);
>> + rcu_read_unlock();
>> + raw_spin_unlock_irq(cpu_lock);
>> +}
>> +
>> int cgroup_rstat_init(struct cgroup *cgrp)
>> {
>> int cpu;
> As I mentioned last time, raw_spin_lock_irq() will be equivalent to an
> RCU protected section anyway, so you don't need to do both. Just add a
> comment on why rcu_read_lock()/rcu_read_unlock() isn't needed inside the
> raw irq safe lock.
Yes, you are right. We don't need rcu_read_lock() here. I put it there
to follow the locking pattern in cgroup_rstat_flush_locked(). I will
remove it in the next version.
Cheers,
Longman