From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([140.186.70.92]:37145)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1R9aV5-00082i-B9 for qemu-devel@nongnu.org;
	Fri, 30 Sep 2011 06:39:28 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1R9aV4-0008L5-42
	for qemu-devel@nongnu.org; Fri, 30 Sep 2011 06:39:27 -0400
Received: from mtagate3.uk.ibm.com ([194.196.100.163]:59144)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1R9aV3-0008In-T7 for qemu-devel@nongnu.org;
	Fri, 30 Sep 2011 06:39:26 -0400
Received: from d06nrmr1806.portsmouth.uk.ibm.com
	(d06nrmr1806.portsmouth.uk.ibm.com [9.149.39.193])
	by mtagate3.uk.ibm.com (8.13.1/8.13.1) with ESMTP id p8UAdIJP021716
	for ; Fri, 30 Sep 2011 10:39:18 GMT
Received: from d06av08.portsmouth.uk.ibm.com
	(d06av08.portsmouth.uk.ibm.com [9.149.37.249])
	by d06nrmr1806.portsmouth.uk.ibm.com (8.13.8/8.13.8/NCO v10.0)
	with ESMTP id p8UAdIjN2265156 for ; Fri, 30 Sep 2011 11:39:18 +0100
Received: from d06av08.portsmouth.uk.ibm.com (loopback [127.0.0.1])
	by d06av08.portsmouth.uk.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout)
	with ESMTP id p8UAdIT3014834 for ; Fri, 30 Sep 2011 11:39:18 +0100
From: Stefan Hajnoczi
Date: Fri, 30 Sep 2011 11:39:11 +0100
Message-Id: <1317379151-11557-1-git-send-email-stefanha@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH] qed: fix use-after-free during l2 cache commit
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Amit Shah
Cc: Anthony Liguori, qemu-devel@nongnu.org, Stefan Hajnoczi

QED's metadata caching strategy allows two parallel requests to race for
metadata lookup.  The first one to complete will populate the metadata
cache and the second one will drop the data it just read in favor of the
cached data.

There is a use-after-free in qed_read_l2_table_cb() and
qed_commit_l2_update() where l2_table->offset was used after the
l2_table may have been freed due to a metadata lookup race.

Fix this by keeping the l2_offset in a local variable and not reaching
into the possibly freed l2_table.

Reported-by: Amit Shah
Signed-off-by: Stefan Hajnoczi
---
Hi Amit,

Thanks for reporting the assertion failure you saw at
http://fpaste.org/CDuv/.  Does this patch fix the problem?  If not,
please send details on your setup and how to reproduce the issue.

Thanks,
Stefan

 block/qed-table.c |    6 +++---
 block/qed.c       |    4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/qed-table.c b/block/qed-table.c
index d96afa8..f31f9ff 100644
--- a/block/qed-table.c
+++ b/block/qed-table.c
@@ -222,21 +222,21 @@ static void qed_read_l2_table_cb(void *opaque, int ret)
     QEDRequest *request = read_l2_table_cb->request;
     BDRVQEDState *s = read_l2_table_cb->s;
     CachedL2Table *l2_table = request->l2_table;
+    uint64_t l2_offset = read_l2_table_cb->l2_offset;
 
     if (ret) {
         /* can't trust loaded L2 table anymore */
         qed_unref_l2_cache_entry(l2_table);
         request->l2_table = NULL;
     } else {
-        l2_table->offset = read_l2_table_cb->l2_offset;
+        l2_table->offset = l2_offset;
 
         qed_commit_l2_cache_entry(&s->l2_cache, l2_table);
 
         /* This is guaranteed to succeed because we just committed the entry
          * to the cache.
          */
-        request->l2_table = qed_find_l2_cache_entry(&s->l2_cache,
-                                                    l2_table->offset);
+        request->l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
         assert(request->l2_table != NULL);
     }
 
diff --git a/block/qed.c b/block/qed.c
index 624e261..e87dc4d 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -911,14 +911,14 @@ static void qed_commit_l2_update(void *opaque, int ret)
     QEDAIOCB *acb = opaque;
     BDRVQEDState *s = acb_to_s(acb);
     CachedL2Table *l2_table = acb->request.l2_table;
+    uint64_t l2_offset = l2_table->offset;
 
     qed_commit_l2_cache_entry(&s->l2_cache, l2_table);
 
     /* This is guaranteed to succeed because we just committed the entry to the
      * cache.
      */
-    acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache,
-                                                    l2_table->offset);
+    acb->request.l2_table = qed_find_l2_cache_entry(&s->l2_cache, l2_offset);
    assert(acb->request.l2_table != NULL);
 
     qed_aio_next_io(opaque, ret);
-- 
1.7.6.3