Subject: Re: [PATCH v4 2/2] lkdtm/powerpc: Add SLB multihit test
To: Michael Ellerman
Cc: Michal Suchánek, linuxppc-dev@lists.ozlabs.org, keescook@chromium.org,
    npiggin@gmail.com, mahesh@linux.ibm.com
References: <20201009064005.19777-1-ganeshgr@linux.ibm.com>
 <20201009064005.19777-3-ganeshgr@linux.ibm.com>
 <87362azdjm.fsf@mpe.ellerman.id.au>
 <20201019131541.GL29778@kitsune.suse.cz>
From: Ganesh
Message-ID: <8e4c8bc6-8949-3627-d6d1-a468f9946501@linux.ibm.com>
Date: Thu, 26 Nov 2020 18:57:08 +0530
In-Reply-To: <20201019131541.GL29778@kitsune.suse.cz>

On 10/19/20 6:45 PM, Michal Suchánek wrote:
> On Mon, Oct 19, 2020 at 09:59:57PM +1100, Michael Ellerman wrote:
>> Hi Ganesh,
>>
>> Some comments below ...
>>
>> Ganesh Goudar writes:
>>> To check machine check handling, add support to inject slb
>>> multihit errors.
>>>
>>> Cc: Kees Cook
>>> Reviewed-by: Michal Suchánek
>>> Co-developed-by: Mahesh Salgaonkar
>>> Signed-off-by: Mahesh Salgaonkar
>>> Signed-off-by: Ganesh Goudar
>>> ---
>>>  drivers/misc/lkdtm/Makefile             |   1 +
>>>  drivers/misc/lkdtm/core.c               |   3 +
>>>  drivers/misc/lkdtm/lkdtm.h              |   3 +
>>>  drivers/misc/lkdtm/powerpc.c            | 156 ++++++++++++++++++++++++
>>>  tools/testing/selftests/lkdtm/tests.txt |   1 +
>>>  5 files changed, 164 insertions(+)
>>>  create mode 100644 drivers/misc/lkdtm/powerpc.c
>>>
>> ..
>>> diff --git a/drivers/misc/lkdtm/powerpc.c b/drivers/misc/lkdtm/powerpc.c
>>> new file mode 100644
>>> index 000000000000..f388b53dccba
>>> --- /dev/null
>>> +++ b/drivers/misc/lkdtm/powerpc.c
>>> @@ -0,0 +1,156 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +#include "lkdtm.h"
>>> +#include
>>> +#include
>> Usual style is to include the linux headers first and then the local header.

ok

>>> +
>>> +/* Gets index for new slb entry */
>>> +static inline unsigned long get_slb_index(void)
>>> +{
>>> +	unsigned long index;
>>> +
>>> +	index = get_paca()->stab_rr;
>>> +
>>> +	/*
>>> +	 * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
>>> +	 */
>>> +	if (index < (mmu_slb_size - 1))
>>> +		index++;
>>> +	else
>>> +		index = SLB_NUM_BOLTED;
>>> +	get_paca()->stab_rr = index;
>>> +	return index;
>>> +}
>> I'm not sure we need that really?
>>
>> We can just always insert at SLB_NUM_BOLTED and SLB_NUM_BOLTED + 1.
>>
>> Or we could allocate from the top down using mmu_slb_size - 1, and
>> mmu_slb_size - 2.

Ok, we can do that.

>>> +#define slb_esid_mask(ssize)	\
>>> +	(((ssize) == MMU_SEGSIZE_256M) ? ESID_MASK : ESID_MASK_1T)
>>> +
>>> +/* Form the operand for slbmte */
>>> +static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
>>> +					 unsigned long slot)
>>> +{
>>> +	return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
>>> +}
>>> +
>>> +#define slb_vsid_shift(ssize)	\
>>> +	((ssize) == MMU_SEGSIZE_256M ? SLB_VSID_SHIFT : SLB_VSID_SHIFT_1T)
>>> +
>>> +/* Form the operand for slbmte */
>>> +static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
>>> +					 unsigned long flags)
>>> +{
>>> +	return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
>>> +		((unsigned long)ssize << SLB_VSID_SSIZE_SHIFT);
>>> +}
>> I realise it's not much code, but I'd rather those were in a header,
>> rather than copied from slb.c. That way they can never skew vs the
>> versions in slb.c.
>>
>> Best place I think would be arch/powerpc/include/asm/book3s/64/mmu-hash.h

Ok, I'll move them.

>>> +
>>> +/* Inserts new slb entry */
>> It inserts two.

Right.

>>> +static void insert_slb_entry(char *p, int ssize)
>>> +{
>>> +	unsigned long flags, entry;
>>> +
>>> +	flags = SLB_VSID_KERNEL | mmu_psize_defs[MMU_PAGE_64K].sllp;
>> That won't work if the kernel is built for 4K pages. Or at least it
>> won't work the way we want it to.
>>
>> You should use mmu_linear_psize.
>>
>> But for vmalloc you should use mmu_vmalloc_psize, so it will need to be
>> a parameter.
Sure, thanks.

>>> +	preempt_disable();
>>> +
>>> +	entry = get_slb_index();
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
>>> +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
>>> +			: "memory");
>>> +
>>> +	entry = get_slb_index();
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (mk_vsid_data((unsigned long)p, ssize, flags)),
>>> +			  "r" (mk_esid_data((unsigned long)p, ssize, entry))
>>> +			: "memory");
>>> +	preempt_enable();
>>> +	/*
>>> +	 * This triggers exception, If handled correctly we must recover
>>> +	 * from this error.
>>> +	 */
>>> +	p[0] = '!';
>> That doesn't belong in here, it should be done by the caller.
>>
>> That would also mean p could be unsigned long in here, so you wouldn't
>> have to cast it four times.

Sure, I'll change it.

>>> +}
>>> +
>>> +/* Inject slb multihit on vmalloc-ed address i.e 0xD00... */
>>> +static void inject_vmalloc_slb_multihit(void)
>>> +{
>>> +	char *p;
>>> +
>>> +	p = vmalloc(2048);
>> vmalloc() allocates whole pages, so it may as well be vmalloc(PAGE_SIZE).

ok

>>> +	if (!p)
>>> +		return;
>> That's unlikely, but it should be an error that's propagated up to the caller.

ok

>>> +
>>> +	insert_slb_entry(p, MMU_SEGSIZE_1T);
>>> +	vfree(p);
>>> +}
>>> +
>>> +/* Inject slb multihit on kmalloc-ed address i.e 0xC00... */
>>> +static void inject_kmalloc_slb_multihit(void)
>>> +{
>>> +	char *p;
>>> +
>>> +	p = kmalloc(2048, GFP_KERNEL);
>>> +	if (!p)
>>> +		return;
>>> +
>>> +	insert_slb_entry(p, MMU_SEGSIZE_1T);
>>> +	kfree(p);
>>> +}
>>> +
>>> +/*
>>> + * Few initial SLB entries are bolted. Add a test to inject
>>> + * multihit in bolted entry 0.
>>> + */
>>> +static void insert_dup_slb_entry_0(void)
>>> +{
>>> +	unsigned long test_address = 0xC000000000000000;
>> Should use PAGE_OFFSET;

ok

>>> +	volatile unsigned long *test_ptr;
>> Does it need to be a volatile?
>> The slbmte should act as a compiler barrier (it has a memory clobber)
>> and a CPU barrier as well?

Yes, volatile is not required, I'll remove it.

>>> +	unsigned long entry, i = 0;
>>> +	unsigned long esid, vsid;
>> Please group your variables:
>>
>> unsigned long esid, vsid, entry, test_address, i;
>> volatile unsigned long *test_ptr;
>>
>> And then initialise them as appropriate.

ok

>>> +	test_ptr = (unsigned long *)test_address;
>>> +	preempt_disable();
>>> +
>>> +	asm volatile("slbmfee %0,%1" : "=r" (esid) : "r" (i));
>>> +	asm volatile("slbmfev %0,%1" : "=r" (vsid) : "r" (i));
>> Why do we need to read them out of the SLB rather than just computing
>> the values?
> It ensures that the entry is a perfect duplicate without copying even more
> code from other parts of the kernel, doesn't it?
>
> Especially when inserting only one duplicate as suggested later it
> ensures that the test really does what it should.
>>> +	entry = get_slb_index();
>>> +
>>> +	/* for i !=0 we would need to mask out the old entry number */
>> Or you could just compute esid and then it wouldn't be an issue.
>>
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (vsid),
>>> +			  "r" (esid | entry)
>>> +			: "memory");
>> At this point we've just inserted a duplicate of entry 0. So you don't
>> need to insert a third entry do you?
> This code was obviously adapted from the previous one which needed two
> entries in case there was none for the memory region to start with.
>
> Adding only one duplicate should suffice and it can be easily tested that
> it still generates an MCE.
>
>>> +	asm volatile("slbmfee %0,%1" : "=r" (esid) : "r" (i));
>>> +	asm volatile("slbmfev %0,%1" : "=r" (vsid) : "r" (i));
>>> +	entry = get_slb_index();
>>> +
>>> +	/* for i !=0 we would need to mask out the old entry number */
>>> +	asm volatile("slbmte %0,%1" :
>>> +			: "r" (vsid),
>>> +			  "r" (esid | entry)
>>> +			: "memory");
>>> +
>>> +	pr_info("%s accessing test address 0x%lx: 0x%lx\n",
>>> +		__func__, test_address, *test_ptr);
>> This prints the first two instructions of the kernel. I happen to know
>> what values they should have, but most people won't understand what
>> they're seeing. A better test would be to read the value at the top of
>> the function and then load it again here and check we got the right
>> thing.
> It does not really matter what we read back so long as the compiler does
> not optimize out the read. The point here is to access an address in the
> range covered by the SLB entry 0. The failure case is that the system
> crashes and the test never finishes.
>
> Thanks
>
> Michal
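
For reference, a minimal sketch (illustrative only, not the actual follow-up
patch) of the shape the review converges on: the page size becomes a
parameter, the two duplicate entries go into fixed slots near the top of the
SLB instead of going through stab_rr, and the faulting access plus error
reporting move out into the caller. It assumes mk_vsid_data() and
mk_esid_data() are shared from a header such as mmu-hash.h, as suggested
above.

static void insert_slb_entry(unsigned long p, int ssize, int psize)
{
	unsigned long flags = SLB_VSID_KERNEL | mmu_psize_defs[psize].sllp;

	/* Insert two identical entries in the top two (non-bolted) slots. */
	asm volatile("slbmte %0,%1" :
			: "r" (mk_vsid_data(p, ssize, flags)),
			  "r" (mk_esid_data(p, ssize, mmu_slb_size - 1))
			: "memory");
	asm volatile("slbmte %0,%1" :
			: "r" (mk_vsid_data(p, ssize, flags)),
			  "r" (mk_esid_data(p, ssize, mmu_slb_size - 2))
			: "memory");
}

static int inject_vmalloc_slb_multihit(void)
{
	unsigned long p = (unsigned long)vmalloc(PAGE_SIZE);

	if (!p)
		return -ENOMEM;	/* propagate the failure to the caller */

	/* Stay on one CPU so the access hits the entries we just inserted. */
	preempt_disable();
	insert_slb_entry(p, MMU_SEGSIZE_1T, mmu_vmalloc_psize);
	/*
	 * This access should raise the SLB multihit machine check; if it is
	 * handled correctly we return here and report success.
	 */
	*(volatile char *)p = '!';
	preempt_enable();

	vfree((void *)p);
	return 0;
}

The kmalloc variant would be identical apart from using kmalloc()/kfree()
and mmu_linear_psize, since a kmalloc-ed address lives in the linear mapping.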