From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <1f0e79c925f79fba884ec905cf55a3eb7b602d48.camel@sipsolutions.net>
Subject: Re: [RFC PATCH v3] UML: add support for KASAN under x86_64
From: Johannes Berg
To: Dmitry Vyukov, David Gow
Cc: Vincent Whitchurch, Patricia Alfonso, Jeff Dike, Richard Weinberger,
 anton.ivanov@cambridgegreys.com, Brendan Higgins, kasan-dev,
 linux-um@lists.infradead.org, LKML, Daniel Latypov
Date: Fri, 27 May 2022 09:32:03 +0200
References: <20220525111756.GA15955@axis.com>
 <20220526010111.755166-1-davidgow@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 2022-05-27 at 07:31 +0200, Dmitry Vyukov wrote:
> > - This doesn't seem to work when CONFIG_STATIC_LINK is enabled
> >   (because libc crt0 code calls memory functions, which expect the
> >   shadow memory to already exist, due to multiple symbols being
> >   resolved).
> > - I think we should just make this depend on dynamic UML.
> > - For that matter, I think static UML is actually broken at the
> >   moment. I'll send a patch out tomorrow.
> 
> I don't know how important the static build is for UML.

Depends who you ask, I guess. IMHO just making KASAN depend on
!STATIC_LINK is fine, until somebody actually wants to do what you
describe:

> Generally I prefer to build things statically b/c e.g. if a testing
> system builds on one machine but runs tests on another, dynamic link
> may be a problem. Or, say, if a testing system provides binary
> artifacts, and then nobody can run it locally.
> 
> One potential way to fix it is to require outline KASAN
> instrumentation for static build and then make kasan_arch_is_ready()
> return false until the shadow is mapped. I see kasan_arch_is_ready()
> is checked at the beginning of all KASAN runtime entry points.
> But it would be nice if the dynamic build also supports inline and
> does not add kasan_arch_is_ready() check overhead.

which sounds fine too, but ... trade-offs.
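(For illustration only: a minimal, userspace-only C sketch of the gating
pattern described above. The names toy_arch_is_ready(), toy_check_access()
and the toy shadow array are invented stand-ins, not the kernel's actual
kasan_arch_is_ready() machinery or the UML patch.)

/* Toy sketch of the gating idea: the "arch" flips a ready flag once the
 * shadow region exists, and every runtime check bails out early before
 * that.  All names here are made up. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define TOY_SHADOW_SIZE (1 << 16)

static unsigned char *toy_shadow;   /* "shadow memory", mapped late */
static bool toy_kasan_ready;        /* what kasan_arch_is_ready() would read */

static bool toy_arch_is_ready(void)
{
        return toy_kasan_ready;
}

/* stands in for an outline KASAN entry point, e.g. a load/store check */
static void toy_check_access(size_t shadow_idx)
{
        if (!toy_arch_is_ready())
                return;             /* shadow not mapped yet: do nothing */

        if (toy_shadow[shadow_idx])
                fprintf(stderr, "toy-kasan: bad access at shadow %zu\n",
                        shadow_idx);
}

static void toy_map_shadow(void)
{
        toy_shadow = calloc(1, TOY_SHADOW_SIZE);
        toy_kasan_ready = true;     /* only now do the checks start running */
}

int main(void)
{
        toy_check_access(42);       /* early call, before shadow exists: skipped */
        toy_map_shadow();
        toy_shadow[42] = 0xFE;      /* poison one shadow byte by hand */
        toy_check_access(42);       /* now the check fires */
        return 0;
}

The cost is the extra branch at the start of every outline check, which is
the kasan_arch_is_ready() overhead an inline-instrumented dynamic build
would want to avoid.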
> > +	if (IS_ENABLED(CONFIG_UML)) {
> > +		__memset(kasan_mem_to_shadow((void *)addr), KASAN_VMALLOC_INVALID, shadow_end - shadow_start);
> 
> "kasan_mem_to_shadow((void *)addr)" can be replaced with shadow_start.

and then the memset line isn't so long anymore :)

> 
> 
> > +		return 0;
> > +	}
> > +
> > +	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
> >  	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
> 
> There is no new fancy PAGE_ALIGN macro for this. And I've seen people

s/no/now the/ I guess, but it's also existing code.

johannes
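(For illustration only: a small standalone C sketch of the two review points
above, i.e. reusing shadow_start in the memset and page-aligning the shadow
range. The constants, toy_mem_to_shadow() and toy_populate_vmalloc() are
invented stand-ins, not the code from the patch; TOY_VMALLOC_INVALID merely
plays the role of KASAN_VMALLOC_INVALID.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TOY_PAGE_SIZE        4096UL
#define TOY_SHADOW_OFFSET    0x100000UL
#define TOY_SHADOW_SHIFT     3            /* one shadow byte per 8 bytes */
#define TOY_VMALLOC_INVALID  0xF8         /* stand-in marker value */

/* round down/up to a boundary, mirroring the kernel's ALIGN_DOWN/ALIGN */
#define TOY_ALIGN_DOWN(x, a) ((x) & ~((a) - 1))
#define TOY_ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))

static uint8_t toy_shadow[1 << 20];       /* pretend shadow region */

static unsigned long toy_mem_to_shadow(unsigned long addr)
{
        return (addr >> TOY_SHADOW_SHIFT) + TOY_SHADOW_OFFSET;
}

static void toy_populate_vmalloc(unsigned long addr, unsigned long size)
{
        unsigned long shadow_start = toy_mem_to_shadow(addr);
        unsigned long shadow_end   = toy_mem_to_shadow(addr + size);

        shadow_start = TOY_ALIGN_DOWN(shadow_start, TOY_PAGE_SIZE);
        shadow_end   = TOY_ALIGN_UP(shadow_end, TOY_PAGE_SIZE);

        /* the shortened memset: reuse shadow_start rather than
         * recomputing toy_mem_to_shadow(addr) a second time */
        memset(&toy_shadow[shadow_start - TOY_SHADOW_OFFSET],
               TOY_VMALLOC_INVALID, shadow_end - shadow_start);
}

int main(void)
{
        toy_populate_vmalloc(0x200000UL, 64 * 1024);
        printf("shadow byte: 0x%02x\n",
               toy_shadow[toy_mem_to_shadow(0x200000UL) - TOY_SHADOW_OFFSET]);
        return 0;
}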