From: Boris Burkov <boris@bur.io>
To: fstests@vger.kernel.org, linux-fscrypt@vger.kernel.org,
	linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v7 4/5] btrfs: test verity orphans with dmlogwrites
Date: Tue, 15 Mar 2022 15:16:00 -0700
Message-Id: <5579a70597cd660ffb265db9e97840a1faca8812.1647382272.git.boris@bur.io>

The behavior of orphans is most interesting across mounts when fsverity
enable is interrupted at an arbitrary point. To cover as many such cases
as possible, use dmlogwrites and dmsnapshot as in
log-writes/replay-individual.sh. At each log entry, assert the following
invariant, which depends on how far verity enable has progressed:

If verity has not yet started: an orphan indicates that verity has
started.

If verity has started: mount should handle the orphan and blow away the
in-progress verity data, so expect 0 Merkle items after mounting the
snapshot dev. If we can measure the file, verity has finished.

If verity has finished: the orphan should be gone, so mount should not
blow away Merkle items. Expect the same number of Merkle items before
and after mounting the snapshot dev.

Note that this relies on grepping the output of btrfs inspect-internal
dump-tree. Until btrfs-progs gains the ability to print the new Merkle
items by name, they show up as UNKNOWN.36/37.

Signed-off-by: Boris Burkov <boris@bur.io>
---
 tests/btrfs/291     | 161 ++++++++++++++++++++++++++++++++++++++++++++
 tests/btrfs/291.out |   2 +
 2 files changed, 163 insertions(+)
 create mode 100755 tests/btrfs/291
 create mode 100644 tests/btrfs/291.out

diff --git a/tests/btrfs/291 b/tests/btrfs/291
new file mode 100755
index 00000000..1bb3f1b3
--- /dev/null
+++ b/tests/btrfs/291
@@ -0,0 +1,161 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021 Facebook, Inc. All Rights Reserved.
+#
+# FS QA Test 291
+#
+# Test btrfs consistency after each FUA while enabling verity on a file
+# This test works by following the pattern in log-writes/replay-individual.sh:
+# 1. run a workload (verity + sync) while logging to the log device
+# 2. replay an entry to the replay device
+# 3. snapshot the replay device to the snapshot device
+# 4. run destructive tests on the snapshot device (e.g. mount with orphans)
+# 5. goto 2
+#
+. ./common/preamble
+_begin_fstest auto verity
+
+# Override the default cleanup function.
+_cleanup()
+{
+	cd /
+	_log_writes_cleanup &> /dev/null
+	rm -f $img
+	$LVM_PROG vgremove -f -y $vgname >>$seqres.full 2>&1
+	losetup -d $loop_dev >>$seqres.full 2>&1
+}
+
+# Import common functions.
+. ./common/filter
+. ./common/attr
+. ./common/dmlogwrites
+. ./common/verity
+
+# real QA test starts here
+_supported_fs btrfs
+
+_require_scratch
+_require_test
+_require_log_writes
+_require_dm_target snapshot
+_require_command $LVM_PROG lvm
+_require_scratch_verity
+_require_btrfs_command inspect-internal dump-tree
+_require_test_program "log-writes/replay-log"
+
+sync_loop() {
+	i=$1
+	[ -z "$i" ] && _fail "sync loop needs a number of iterations"
+	while [ $i -gt 0 ]
+	do
+		$XFS_IO_PROG -c sync $SCRATCH_MNT
+		let i-=1
+	done
+}
+
+dump_tree() {
+	local dev=$1
+	$BTRFS_UTIL_PROG inspect-internal dump-tree $dev
+}
+
+count_item() {
+	local dev=$1
+	local item=$2
+	dump_tree $dev | grep -c $item
+}
+
+_log_writes_init $SCRATCH_DEV
+_log_writes_mkfs
+_log_writes_mount
+
+f=$SCRATCH_MNT/fsv
+MB=$((1024 * 1024))
+img=$TEST_DIR/$$.img
+$XFS_IO_PROG -fc "pwrite -q 0 $((10 * $MB))" $f
+$XFS_IO_PROG -c sync $SCRATCH_MNT
+sync_loop 10 &
+sync_proc=$!
+_fsv_enable $f
+$XFS_IO_PROG -c sync $SCRATCH_MNT
+wait $sync_proc
+
+_log_writes_unmount
+_log_writes_remove
+
+# the snapshot and the replay will each be the size of the log writes dev
+# so we create a loop device of size 2 * logwrites and then split it into
+# replay and snapshot with lvm.
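+# Note: blockdev --getsz reports the device size in 512-byte sectors,
+# hence the conversion to bytes below before sizing the LVs.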
+log_writes_blocks=$(blockdev --getsz $LOGWRITES_DEV)
+replay_bytes=$((512 * $log_writes_blocks))
+img_bytes=$((2 * $replay_bytes))
+
+$XFS_IO_PROG -fc "pwrite -q -S 0 $img_bytes $MB" $img >>$seqres.full 2>&1 || \
+	_fail "failed to create image for loop device"
+loop_dev=$(losetup -f --show $img)
+vgname=vg_replay
+lvname=lv_replay
+replay_dev=/dev/mapper/vg_replay-lv_replay
+snapname=lv_snap
+snap_dev=/dev/mapper/vg_replay-$snapname
+
+$LVM_PROG vgcreate -f $vgname $loop_dev >>$seqres.full 2>&1 || _fail "failed to vgcreate $vgname"
+$LVM_PROG lvcreate -L "$replay_bytes"B -n $lvname $vgname -y >>$seqres.full 2>&1 || \
+	_fail "failed to lvcreate $lvname"
+$UDEV_SETTLE_PROG >>$seqres.full 2>&1
+
+replay_log_prog=$here/src/log-writes/replay-log
+num_entries=$($replay_log_prog --log $LOGWRITES_DEV --num-entries)
+entry=$($replay_log_prog --log $LOGWRITES_DEV --replay $replay_dev --find --end-mark mkfs | cut -d@ -f1)
+$replay_log_prog --log $LOGWRITES_DEV --replay $replay_dev --limit $entry || \
+	_fail "failed to replay to start entry $entry"
+let entry+=1
+
+# state = 0: verity hasn't started
+# state = 1: verity underway
+# state = 2: verity done
+state=0
+while [ $entry -lt $num_entries ];
+do
+	$replay_log_prog --limit 1 --log $LOGWRITES_DEV --replay $replay_dev --start $entry || \
+		_fail "failed to take replay step at entry: $entry"
+
+	$LVM_PROG lvcreate -s -L 4M -n $snapname $vgname/$lvname >>$seqres.full 2>&1 || \
+		_fail "Failed to create snapshot"
+	$UDEV_SETTLE_PROG >>$seqres.full 2>&1
+
+	orphan=$(count_item $snap_dev ORPHAN)
+	if [ $state -eq 0 ]; then
+		[ $orphan -gt 0 ] && state=1
+	fi
+
+	pre_mount=$(count_item $snap_dev UNKNOWN.3[67])
+	_mount $snap_dev $SCRATCH_MNT || _fail "mount failed at entry $entry"
+	fsverity measure $SCRATCH_MNT/fsv >>$seqres.full 2>&1
+	measured=$?
+	umount $SCRATCH_MNT
+	[ $state -eq 1 ] && [ $measured -eq 0 ] && state=2
+	[ $state -eq 2 ] && ([ $measured -eq 0 ] || _fail "verity done, but measurement failed at entry $entry")
+	post_mount=$(count_item $snap_dev UNKNOWN.3[67])
+
+	echo "entry: $entry, state: $state, orphan: $orphan, pre_mount: $pre_mount, post_mount: $post_mount" >> $seqres.full
+
+	if [ $state -eq 1 ]; then
+		[ $post_mount -eq 0 ] || \
+			_fail "mount failed to clear under-construction merkle items pre: $pre_mount, post: $post_mount at entry $entry";
+	fi
+	if [ $state -eq 2 ]; then
+		[ $pre_mount -gt 0 ] || \
+			_fail "expected to have verity items before mount at entry $entry"
+		[ $pre_mount -eq $post_mount ] || \
+			_fail "mount cleared merkle items after verity was enabled $pre_mount vs $post_mount at entry $entry";
+	fi
+
+	let entry+=1
+	$LVM_PROG lvremove $vgname/$snapname -y >>$seqres.full
+done
+
+echo "Silence is golden"
+
+# success, all done
+status=0
+exit
diff --git a/tests/btrfs/291.out b/tests/btrfs/291.out
new file mode 100644
index 00000000..04605c70
--- /dev/null
+++ b/tests/btrfs/291.out
@@ -0,0 +1,2 @@
+QA output created by 291
+Silence is golden
-- 
2.31.0