From: Jan Glauber
To: Paolo Bonzini
Subject: Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
Date: Mon, 7 Oct 2019 14:36:38 +0000
Message-ID: <20191007143629.GA23062@hc>
References: <1864070a-2f84-1d98-341e-f01ddf74ec4b@ubuntu.com> <20190924202517.GA21422@xps13.dannf> <20191002092253.GA3857@hc> <6dd73749-49b0-0fbc-b9bb-44c3736642b8@redhat.com>
In-Reply-To: <6dd73749-49b0-0fbc-b9bb-44c3736642b8@redhat.com>
Cc: Rafael David Tinoco, lizhengui, dann frazier, QEMU Developers, Bug 1805256 <1805256@bugs.launchpad.net>, QEMU Developers - ARM

On Mon, Oct 07, 2019 at 01:06:20PM +0200, Paolo Bonzini wrote:
> On 02/10/19 11:23, Jan Glauber wrote:
> > I've looked into this on ThunderX2. The arm64 code generated for the
> > atomic_[add|sub] accesses of ctx->notify_me doesn't contain any
> > memory barriers. It is just plain ldaxr/stlxr.
> >
> > From my understanding this is not sufficient for SMP sync.
> >
> > If I read this comment correctly:
> >
> > void aio_notify(AioContext *ctx)
> > {
> >     /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
> >      * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
> >      */
> >     smp_mb();
> >     if (ctx->notify_me) {
> >
> > it points out that the smp_mb() should be paired.
> > But as I said, the used atomics don't generate any barriers at all.
>
> Based on the rest of the thread, this patch should also fix the bug:
>
> diff --git a/util/async.c b/util/async.c
> index 47dcbfa..721ea53 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -249,7 +249,7 @@ aio_ctx_check(GSource *source)
>      aio_notify_accept(ctx);
>
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> -        if (bh->scheduled) {
> +        if (atomic_mb_read(&bh->scheduled)) {
>              return true;
>          }
>      }
>
>
> And also the memory barrier in aio_notify can actually be replaced
> with a SEQ_CST load:
>
> diff --git a/util/async.c b/util/async.c
> index 47dcbfa..721ea53 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -349,11 +349,11 @@ LinuxAioState *aio_get_linux_aio(AioContext *ctx)
>
>  void aio_notify(AioContext *ctx)
>  {
> -    /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
> -     * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
> +    /* Using atomic_mb_read ensures that e.g. bh->scheduled is written before
> +     * ctx->notify_me is read.  Pairs with atomic_or in aio_ctx_prepare or
> +     * atomic_add in aio_poll.
>       */
> -    smp_mb();
> -    if (ctx->notify_me) {
> +    if (atomic_mb_read(&ctx->notify_me)) {
>          event_notifier_set(&ctx->notifier);
>          atomic_mb_set(&ctx->notified, true);
>      }
>
>
> Would you be able to test these (one by one possibly)?

Sure.

> > I've tried to verify my theory with this patch and didn't run into the
> > issue for ~500 iterations (usually I would trigger the issue after ~20 iterations).
>
> Sorry for asking the obvious---500 iterations of what?

The testcase mentioned in the Canonical issue:
https://bugs.launchpad.net/qemu/+bug/1805256

It's a simple image convert:
qemu-img convert -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

Usually it got stuck after 3-20 iterations.
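For illustration, here is a minimal, self-contained C11 model of the pairing that the aio_notify() comment above describes. This is a sketch with stand-in names (notify_side, poll_side, the global atomics), not QEMU's atomic.h or util/async.c code:

/*
 * Illustrative C11 model (not QEMU code) of the notify_me handshake.
 * The notifier must make "scheduled" visible before it reads notify_me;
 * the poller must make its update of notify_me visible before it
 * re-checks "scheduled".  Both sides need full (seq_cst) ordering.
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int  notify_me;
static atomic_bool scheduled;
static atomic_bool notified;

/* aio_notify() side: schedule work, then decide whether to kick the poller. */
static void notify_side(void)
{
    atomic_store_explicit(&scheduled, true, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);        /* the smp_mb() */
    if (atomic_load_explicit(&notify_me, memory_order_relaxed)) {
        atomic_store(&notified, true);                /* event_notifier_set() stand-in */
    }
}

/* aio_poll() side: announce intent to block, then re-check for work. */
static bool poll_side(void)
{
    atomic_fetch_add(&notify_me, 2);   /* seq_cst RMW pairs with the fence above */
    bool have_work = atomic_load(&scheduled);
    atomic_fetch_sub(&notify_me, 2);
    return have_work;                  /* if false, the poller may block */
}

The sequentially consistent read-modify-write on the polling side provides the second half of the pairing; if the increment of notify_me is only a bare load/store-exclusive loop without that ordering, the notifier can read a stale notify_me (or the poller a stale scheduled) and the wakeup is lost, which matches the lockup described in this thread.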
--Jan
--
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1805256

Title:
  qemu-img hangs on rcu_call_ready_event logic in Aarch64 when
  converting images

Status in kunpeng920: New
Status in QEMU: In Progress
Status in qemu package in Ubuntu: In Progress
Status in qemu source package in Bionic: New
Status in qemu source package in Disco: New
Status in qemu source package in Eoan: In Progress
Status in qemu source package in FF-Series: New

Bug description:
  Command:
  qemu-img convert -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

  Hangs indefinitely approximately 30% of the runs.

  ----

  Workaround:
  qemu-img convert -m 1 -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

  Run "qemu-img convert" with "a single coroutine" to avoid this issue.

  ----

  (gdb) thread 1
  ...
  (gdb) bt
  #0  0x0000ffffbf1ad81c in __GI_ppoll
  #1  0x0000aaaaaabcf73c in ppoll
  #2  qemu_poll_ns
  #3  0x0000aaaaaabd0764 in os_host_main_loop_wait
  #4  main_loop_wait
  ...

  (gdb) thread 2
  ...
  (gdb) bt
  #0  syscall ()
  #1  0x0000aaaaaabd41cc in qemu_futex_wait
  #2  qemu_event_wait (ev=ev@entry=0xaaaaaac86ce8 <rcu_call_ready_event>)
  #3  0x0000aaaaaabed05c in call_rcu_thread
  #4  0x0000aaaaaabd34c8 in qemu_thread_start
  #5  0x0000ffffbf25c880 in start_thread
  #6  0x0000ffffbf1b6b9c in thread_start ()

  (gdb) thread 3
  ...
  (gdb) bt
  #0  0x0000ffffbf11aa20 in __GI___sigtimedwait
  #1  0x0000ffffbf2671b4 in __sigwait
  #2  0x0000aaaaaabd1ddc in sigwait_compat
  #3  0x0000aaaaaabd34c8 in qemu_thread_start
  #4  0x0000ffffbf25c880 in start_thread
  #5  0x0000ffffbf1b6b9c in thread_start

  ----

  (gdb) run
  Starting program: /usr/bin/qemu-img convert -f qcow2 -O qcow2 ./disk01.ext4.qcow2 ./output.qcow2
  [New Thread 0xffffbec5ad90 (LWP 72839)]
  [New Thread 0xffffbe459d90 (LWP 72840)]
  [New Thread 0xffffbdb57d90 (LWP 72841)]
  [New Thread 0xffffacac9d90 (LWP 72859)]
  [New Thread 0xffffa7ffed90 (LWP 72860)]
  [New Thread 0xffffa77fdd90 (LWP 72861)]
  [New Thread 0xffffa6ffcd90 (LWP 72862)]
  [New Thread 0xffffa67fbd90 (LWP 72863)]
  [New Thread 0xffffa5ffad90 (LWP 72864)]
  [Thread 0xffffa5ffad90 (LWP 72864) exited]
  [Thread 0xffffa6ffcd90 (LWP 72862) exited]
  [Thread 0xffffa77fdd90 (LWP 72861) exited]
  [Thread 0xffffbdb57d90 (LWP 72841) exited]
  [Thread 0xffffa67fbd90 (LWP 72863) exited]
  [Thread 0xffffacac9d90 (LWP 72859) exited]
  [Thread 0xffffa7ffed90 (LWP 72860) exited]

  All the tasks left are blocked in a system call, so no task is left to
  call qemu_futex_wake() to unblock thread #2 (in futex()), which would
  unblock thread #1 (doing poll() in a pipe with thread #2).

  Those 7 threads exit before disk conversion is complete (sometimes in
  the beginning, sometimes at the end).

  ----

  [ Original Description ]

  On the HiSilicon D06 system - a 96 core NUMA arm64 box - qemu-img
  frequently hangs (~50% of the time) with this command:

  qemu-img convert -f qcow2 -O qcow2 /tmp/cloudimg /tmp/cloudimg2

  Where "cloudimg" is a standard qcow2 Ubuntu cloud image. This
  qcow2->qcow2 conversion happens to be something uvtool does every time
  it fetches images.
  Once hung, attaching gdb gives the following backtrace:

  (gdb) bt
  #0  0x0000ffffae4f8154 in __GI_ppoll (fds=0xaaaae8a67dc0, nfds=187650274213760,
      timeout=<optimized out>, timeout@entry=0x0, sigmask=0xffffc123b950)
      at ../sysdeps/unix/sysv/linux/ppoll.c:39
  #1  0x0000aaaabbefaf00 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>,
      __fds=<optimized out>) at /usr/include/aarch64-linux-gnu/bits/poll2.h:77
  #2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>,
      timeout=timeout@entry=-1) at util/qemu-timer.c:322
  #3  0x0000aaaabbefbf80 in os_host_main_loop_wait (timeout=-1)
      at util/main-loop.c:233
  #4  main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:497
  #5  0x0000aaaabbe2aa30 in convert_do_copy (s=0xffffc123bb58) at qemu-img.c:1980
  #6  img_convert (argc=<optimized out>, argv=<optimized out>) at qemu-img.c:2456
  #7  0x0000aaaabbe2333c in main (argc=7, argv=<optimized out>) at qemu-img.c:4975

  Reproduced w/ latest QEMU git (@ 53744e0a182)

  To manage notifications about this bug go to:
  https://bugs.launchpad.net/kunpeng920/+bug/1805256/+subscriptions
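To connect the backtraces to the barrier discussion earlier in the thread: thread #2 is parked in a futex-based event wait. Below is a rough, self-contained sketch of that general pattern, under the assumption of a simple two-state event; the names event_t, event_wait and event_set are illustrative and this is not QEMU's qemu_event_wait()/qemu_futex_wait() implementation, which uses a more elaborate state machine:

/*
 * Simplified futex-event sketch (illustrative only).  The waiter blocks
 * in the kernel until the value changes and a FUTEX_WAKE arrives; if the
 * wake is never issued -- e.g. because the notify handshake discussed
 * above misfires -- the waiter sleeps forever, like thread #2 above.
 */
#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

typedef struct { atomic_int value; } event_t;   /* 0 = clear, 1 = set */

static void event_wait(event_t *ev)
{
    while (atomic_load(&ev->value) == 0) {
        /* Sleeps until the value is no longer 0 and a FUTEX_WAKE is sent. */
        syscall(SYS_futex, &ev->value, FUTEX_WAIT, 0, NULL, NULL, 0);
    }
}

static void event_set(event_t *ev)
{
    /* The store must be globally visible before the wake is issued. */
    atomic_store(&ev->value, 1);
    syscall(SYS_futex, &ev->value, FUTEX_WAKE, 1, NULL, NULL, 0);
}

In the hang above, no remaining thread ever reaches the event_set() side, so the call_rcu thread stays blocked in FUTEX_WAIT and the main loop keeps waiting in ppoll() for progress that never comes.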