Date: Fri, 8 Feb 2019 14:17:26 -0800
From: Luis Chamberlain
To: Sasha Levin
Cc: Dave Chinner, linux-xfs@vger.kernel.org, gregkh@linuxfoundation.org,
	Alexander.Levin@microsoft.com, stable@vger.kernel.org,
	amir73il@gmail.com, hch@infradead.org
Subject: Re: [PATCH v2 00/10] xfs: stable fixes for v4.19.y
Message-ID: <20190208221726.GM11489@garbanzo.do-not-panic.com>
References: <20190204165427.23607-1-mcgrof@kernel.org>
	<20190205220655.GF14116@dastard>
	<20190206040559.GA4119@sasha-vm>
	<20190206215454.GG14116@dastard>
	<20190208060620.GA31898@sasha-vm>
In-Reply-To: <20190208060620.GA31898@sasha-vm>
User-Agent: Mutt/1.10.1 (2018-07-13)
List-ID: <stable.vger.kernel.org>

On Fri, Feb 08, 2019 at 01:06:20AM -0500, Sasha Levin wrote:
> Sure! Below are the various configs this was run against. There were
> multiple runs over 48+ hours and no regressions from a 4.14.17 baseline
> were observed.
In an effort to consolidate our sections:

> [default]
> TEST_DEV=/dev/nvme0n1p1
> TEST_DIR=/media/test
> SCRATCH_DEV_POOL="/dev/nvme0n1p2"
> SCRATCH_MNT=/media/scratch
> RESULT_BASE=$PWD/results/$HOST/$(uname -r)
> MKFS_OPTIONS='-f -m crc=1,reflink=0,rmapbt=0, -i sparse=0'

This matches my "xfs" section.

> USE_EXTERNAL=no
> LOGWRITES_DEV=/dev/nve0n1p3
> FSTYP=xfs
>
>
> [default]
> TEST_DEV=/dev/nvme0n1p1
> TEST_DIR=/media/test
> SCRATCH_DEV_POOL="/dev/nvme0n1p2"
> SCRATCH_MNT=/media/scratch
> RESULT_BASE=$PWD/results/$HOST/$(uname -r)
> MKFS_OPTIONS='-f -m reflink=1,rmapbt=1, -i sparse=1,'

This matches my "xfs_reflink" section.

> USE_EXTERNAL=no
> LOGWRITES_DEV=/dev/nvme0n1p3
> FSTYP=xfs
>
>
> [default]
> TEST_DEV=/dev/nvme0n1p1
> TEST_DIR=/media/test
> SCRATCH_DEV_POOL="/dev/nvme0n1p2"
> SCRATCH_MNT=/media/scratch
> RESULT_BASE=$PWD/results/$HOST/$(uname -r)
> MKFS_OPTIONS='-f -m reflink=1,rmapbt=1, -i sparse=1, -b size=1024,'

This matches my "xfs_reflink_1024" section.

> USE_EXTERNAL=no
> LOGWRITES_DEV=/dev/nvme0n1p3
> FSTYP=xfs
>
>
> [default]
> TEST_DEV=/dev/nvme0n1p1
> TEST_DIR=/media/test
> SCRATCH_DEV_POOL="/dev/nvme0n1p2"
> SCRATCH_MNT=/media/scratch
> RESULT_BASE=$PWD/results/$HOST/$(uname -r)
> MKFS_OPTIONS='-f -m crc=0,reflink=0,rmapbt=0, -i sparse=0,'

This matches my "xfs_nocrc" section.

> USE_EXTERNAL=no
> LOGWRITES_DEV=/dev/nvme0n1p3
> FSTYP=xfs
>
>
> [default]
> TEST_DEV=/dev/nvme0n1p1
> TEST_DIR=/media/test
> SCRATCH_DEV_POOL="/dev/nvme0n1p2"
> SCRATCH_MNT=/media/scratch
> RESULT_BASE=$PWD/results/$HOST/$(uname -r)
> MKFS_OPTIONS='-f -m crc=0,reflink=0,rmapbt=0, -i sparse=0, -b size=512,'

This matches my "xfs_nocrc_512" section.

> USE_EXTERNAL=no
> LOGWRITES_DEV=/dev/nvme0n1p3
> FSTYP=xfs
>
>
> [default_pmem]
> TEST_DEV=/dev/pmem0

I'll have to add this to my framework. Have you found pmem issues not
present on other sections?
> TEST_DIR=/media/test
> SCRATCH_DEV_POOL="/dev/pmem1"
> SCRATCH_MNT=/media/scratch
> RESULT_BASE=$PWD/results/$HOST/$(uname -r)-pmem
> MKFS_OPTIONS='-f -m crc=1,reflink=0,rmapbt=0, -i sparse=0'

OK, so you just repeat the above options verbatim but for pmem, correct?

Any reason you don't name the sections with finer granularity? It would
help ensure that when we revise our respective tests we can more easily
tell whether we're talking about apples, pears, or bananas.

FWIW, I now run two different bare metal hosts, and each has a VM guest
per section above. One host I use for tracking stable, the other for my
own changes. This makes it harder for me to mess things up and lets me
re-test quickly at any time. I dedicate each VM guest to testing *one*
section. I do this easily with oscheck:

  ./oscheck.sh --test-section xfs_nocrc | tee log-xfs-4.19.18+

For instance, this will test just the xfs_nocrc section. On average each
section takes about 1 hour to run. I could run the tests on raw nvme and
do away with the guests, but that loses some of my ability to debug
crashes easily, since a crash would take out the bare metal host. But I'm
curious: how long do your tests take? How about per section? Say just the
default "xfs" section? IIRC you also had your system on Hyper-V :) so
maybe you can still debug crashes easily.

  Luis
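PS: on the naming point, the separate [default] files above could be
collapsed into one sectioned fstests local.config, with a named section
per MKFS_OPTIONS variant; fstests' check script can then pick a section
with -s. A minimal sketch (the section names mirror mine; how you split
the shared variables is an assumption on my part):

```ini
# Hypothetical local.config consolidating the configs quoted above.
# Run one section with: ./check -s xfs_reflink
[xfs]
TEST_DEV=/dev/nvme0n1p1
TEST_DIR=/media/test
SCRATCH_DEV_POOL="/dev/nvme0n1p2"
SCRATCH_MNT=/media/scratch
MKFS_OPTIONS='-f -m crc=1,reflink=0,rmapbt=0, -i sparse=0'
FSTYP=xfs

[xfs_reflink]
MKFS_OPTIONS='-f -m reflink=1,rmapbt=1, -i sparse=1,'

[xfs_reflink_1024]
MKFS_OPTIONS='-f -m reflink=1,rmapbt=1, -i sparse=1, -b size=1024,'

[xfs_nocrc]
MKFS_OPTIONS='-f -m crc=0,reflink=0,rmapbt=0, -i sparse=0,'

[xfs_nocrc_512]
MKFS_OPTIONS='-f -m crc=0,reflink=0,rmapbt=0, -i sparse=0, -b size=512,'
```

That way the section name itself tells us which mkfs variant a failure
came from.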
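PPS: the one-guest-per-section workflow I described can be driven by a
tiny wrapper. This sketch only prints the commands it would dispatch;
the section list and the oscheck invocation come from this thread, while
the log naming is my own assumption:

```shell
#!/bin/sh
# Dispatch one fstests run per config section, one dedicated VM guest
# per section, so a crash in one section never disturbs the others.
sections="xfs xfs_nocrc xfs_nocrc_512 xfs_reflink xfs_reflink_1024"
kernel="4.19.18+"   # illustrative; in practice taken from the guest's uname -r

for section in $sections; do
	# Print rather than run, to keep this sketch side-effect free.
	echo "./oscheck.sh --test-section $section | tee log-$section-$kernel"
done
```

At roughly an hour per section, the five sections above run in parallel
across guests in about an hour wall-clock instead of five sequentially.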