From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Jun 2022 14:32:57 -0700
In-Reply-To: <20220624213257.1504783-1-ricarkol@google.com>
Message-Id: <20220624213257.1504783-14-ricarkol@google.com>
Mime-Version: 1.0
References: <20220624213257.1504783-1-ricarkol@google.com>
X-Mailer: git-send-email 2.37.0.rc0.161.g10f37bed90-goog
Subject: [PATCH v4 13/13] KVM: selftests: aarch64: Add mix of tests into page_fault_test
From: Ricardo Koller
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, drjones@redhat.com
Cc: pbonzini@redhat.com, maz@kernel.org, alexandru.elisei@arm.com, eric.auger@redhat.com, oupton@google.com, reijiw@google.com, rananta@google.com, bgardon@google.com, dmatlack@google.com, axelrasmussen@google.com, Ricardo Koller
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: kvm@vger.kernel.org

Add a mix of tests into page_fault_test: memslots with all the pairwise
combinations of read-only, userfaultfd, and dirty-logging. For example,
writing into a read-only memslot which has a hole handled with
userfaultfd.

Signed-off-by: Ricardo Koller
---
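Note (editor's sketch, not part of the patch): for readers who have not used
userfaultfd, the "hole handled with userfaultfd" wording above refers to the
plain Linux flow sketched below: register a MISSING-mode range, read the fault
message from the uffd file descriptor, and resolve it with UFFDIO_COPY. The
selftest's own handlers (uffd_test_read_handler and friends) follow the same
pattern through the test harness; everything here is generic userfaultfd usage
with illustrative names. Build with: gcc -pthread uffd_sketch.c

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <poll.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *toucher(void *mem)
{
	/* The first read of the missing page blocks until UFFDIO_COPY resolves it. */
	volatile char c = *(volatile char *)mem;
	(void)c;
	return NULL;
}

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = { 0 };
	struct uffdio_copy copy = { 0 };
	struct uffd_msg msg;
	struct pollfd pfd;
	pthread_t thread;
	char *mem, *src;
	int uffd;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
		exit(1);

	/* The "hole": an anonymous mapping whose pages are not populated yet. */
	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	reg.range.start = (unsigned long)mem;
	reg.range.len = page_size;
	reg.mode = UFFDIO_REGISTER_MODE_MISSING;
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		exit(1);

	pthread_create(&thread, NULL, toucher, mem);

	/* Wait for the page-fault notification triggered by the toucher thread. */
	pfd.fd = uffd;
	pfd.events = POLLIN;
	poll(&pfd, 1, -1);
	if (read(uffd, &msg, sizeof(msg)) != sizeof(msg) ||
	    msg.event != UFFD_EVENT_PAGEFAULT)
		exit(1);

	/* Resolve the fault by copying in a freshly initialized page. */
	src = malloc(page_size);
	memset(src, 0xab, page_size);
	copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
	copy.src = (unsigned long)src;
	copy.len = page_size;
	if (ioctl(uffd, UFFDIO_COPY, &copy))
		exit(1);

	pthread_join(thread, NULL);
	printf("resolved fault at 0x%llx\n",
	       (unsigned long long)msg.arg.pagefault.address);
	return 0;
}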
 .../selftests/kvm/aarch64/page_fault_test.c   | 178 ++++++++++++++++++
 1 file changed, 178 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index c96fc2fd3390..4116a35979d5 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -396,6 +396,12 @@ static int uffd_test_read_handler(int mode, int uffd, struct uffd_msg *msg)
 	return uffd_generic_handler(mode, uffd, msg, &memslot[TEST], false);
 }
 
+static int uffd_no_handler(int mode, int uffd, struct uffd_msg *msg)
+{
+	TEST_FAIL("There was no UFFD fault expected.");
+	return -1;
+}
+
 /* Returns false if the test should be skipped. */
 static bool punch_hole_in_memslot(struct kvm_vm *vm,
 		struct memslot_desc *memslot)
@@ -886,6 +892,22 @@ static void help(char *name)
 	.expected_events = { 0 },					\
 }
 
+#define TEST_UFFD_AND_DIRTY_LOG(_access, _with_af, _uffd_test_handler,	\
+				_uffd_faults, _test_check)		\
+{									\
+	.name = SCAT3(uffd_and_dirty_log, _access, _with_af),		\
+	.test_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES,			\
+	.guest_prepare = { _PREPARE(_with_af),				\
+			   _PREPARE(_access) },				\
+	.guest_test = _access,						\
+	.mem_mark_cmd = CMD_HOLE_TEST | CMD_HOLE_PT,			\
+	.guest_test_check = { _CHECK(_with_af), _test_check },		\
+	.uffd_test_handler = _uffd_test_handler,			\
+	.uffd_pt_handler = uffd_pt_write_handler,			\
+	.expected_events = { .uffd_faults = _uffd_faults, },		\
+}
+
 #define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits,		\
 			_iabt_handler, _dabt_handler, _aborts)		\
 {									\
@@ -912,6 +934,71 @@ static void help(char *name)
 	.expected_events = { .aborts = 1, .fail_vcpu_runs = 1 },	\
 }
 
+#define TEST_RO_MEMSLOT_AND_DIRTY_LOG(_access, _mmio_handler, _mmio_exits,	\
+				      _iabt_handler, _dabt_handler, _aborts,	\
+				      _test_check)				\
+{										\
+	.name = SCAT2(ro_memslot_and_dlog, _access),				\
+	.test_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES,	\
+	.pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES,	\
+	.guest_prepare = { _PREPARE(_access) },					\
+	.guest_test = _access,							\
+	.guest_test_check = { _test_check },					\
+	.mmio_handler = _mmio_handler,						\
+	.iabt_handler = _iabt_handler,						\
+	.dabt_handler = _dabt_handler,						\
+	.expected_events = { .mmio_exits = _mmio_exits,				\
+			     .aborts = _aborts},				\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(_access, _test_check)	\
+{										\
+	.name = SCAT2(ro_memslot_no_syn_and_dlog, _access),			\
+	.test_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES,	\
+	.pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES,	\
+	.guest_test = _access,							\
+	.guest_test_check = { _test_check },					\
+	.dabt_handler = dabt_s1ptw_on_ro_memslot_handler,			\
+	.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler,	\
+	.expected_events = { .aborts = 1, .fail_vcpu_runs = 1 },		\
+}
+
+#define TEST_RO_MEMSLOT_AND_UFFD(_access, _mmio_handler, _mmio_exits,		\
+				 _iabt_handler, _dabt_handler, _aborts,		\
+				 _uffd_test_handler, _uffd_faults)		\
+{										\
+	.name = SCAT2(ro_memslot_uffd, _access),				\
+	.test_memslot_flags = KVM_MEM_READONLY,					\
+	.pt_memslot_flags = KVM_MEM_READONLY,					\
+	.mem_mark_cmd = CMD_HOLE_TEST | CMD_HOLE_PT,				\
+	.guest_prepare = { _PREPARE(_access) },					\
+	.guest_test = _access,							\
+	.uffd_test_handler = _uffd_test_handler,				\
+	.uffd_pt_handler = uffd_pt_write_handler,				\
+	.mmio_handler = _mmio_handler,						\
+	.iabt_handler = _iabt_handler,						\
+	.dabt_handler = _dabt_handler,						\
+	.expected_events = { .mmio_exits = _mmio_exits,				\
+			     .aborts = _aborts,					\
+			     .uffd_faults = _uffd_faults },			\
+}
+
+#define TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(_access, _uffd_test_handler,	\
+					     _uffd_faults)			\
+{										\
+	.name = SCAT2(ro_memslot_no_syndrome, _access),				\
+	.test_memslot_flags = KVM_MEM_READONLY,					\
+	.pt_memslot_flags = KVM_MEM_READONLY,					\
+	.mem_mark_cmd = CMD_HOLE_TEST | CMD_HOLE_PT,				\
+	.guest_test = _access,							\
+	.uffd_test_handler = _uffd_test_handler,				\
+	.uffd_pt_handler = uffd_pt_write_handler,				\
+	.dabt_handler = dabt_s1ptw_on_ro_memslot_handler,			\
+	.fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler,	\
+	.expected_events = { .aborts = 1, .fail_vcpu_runs = 1,			\
+			     .uffd_faults = _uffd_faults },			\
+}
+
 static struct test_desc tests[] = {
 	/* Check that HW is setting the Access Flag (AF) (sanity checks). */
 	TEST_ACCESS(guest_read64, with_af, CMD_NONE),
@@ -979,6 +1066,35 @@ static struct test_desc tests[] = {
 	TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log),
 	TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log),
 
+	/*
+	 * Access when the test and PT memslots are both marked for dirty
+	 * logging and UFFD at the same time. The expected result is that
+	 * writes should mark the dirty log and trigger a userfaultfd write
+	 * fault. Reads/execs should result in a read userfaultfd fault, and
+	 * nothing in the dirty log. The S1PTW in all cases should result in a
+	 * write in the dirty log and a userfaultfd write.
+	 */
+	TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, uffd_test_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	/* no_af should also lead to a PT write. */
+	TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, uffd_test_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, uffd_test_read_handler,
+				2, guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, 0, 1,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, uffd_test_read_handler, 2,
+				guest_check_no_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, uffd_test_write_handler,
+				2, guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, uffd_test_read_handler, 2,
+				guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, uffd_test_write_handler,
+				2, guest_check_write_in_dirty_log),
+	TEST_UFFD_AND_DIRTY_LOG(guest_st_preidx, with_af,
+				uffd_test_write_handler, 2,
+				guest_check_write_in_dirty_log),
+
 	/*
 	 * Try accesses when both the test and PT memslots are marked read-only
 	 * (with KVM_MEM_READONLY). The S1PTW results in an guest abort, whose
@@ -1005,6 +1121,68 @@ static struct test_desc tests[] = {
 	TEST_RO_MEMSLOT_NO_SYNDROME(guest_cas),
 	TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx),
 
+	/*
+	 * Access when both the test and PT memslots are read-only and marked
+	 * for dirty logging at the same time. The expected result is that
+	 * there should be no write in the dirty log. The S1PTW results in an
+	 * abort which is handled by asking the host to recreate the memslot as
+	 * writable. The readonly handling is the same as if the memslots were
+	 * not marked for dirty logging: writes with a syndrome result in an
+	 * MMIO exit, and writes with no syndrome result in a failed vcpu run.
+	 */
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_read64, 0, 0, 0,
+				      dabt_s1ptw_on_ro_memslot_handler, 1,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_ld_preidx, 0, 0, 0,
+				      dabt_s1ptw_on_ro_memslot_handler, 1,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_at, 0, 0, 0,
+				      dabt_s1ptw_on_ro_memslot_handler, 1,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_exec, 0, 0,
+				      iabt_s1ptw_on_ro_memslot_handler, 0, 1,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_AND_DIRTY_LOG(guest_write64, mmio_on_test_gpa_handler,
+				      1, 0, dabt_s1ptw_on_ro_memslot_handler, 1,
+				      guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_dc_zva,
+						  guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_cas,
+						  guest_check_no_write_in_dirty_log),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_DIRTY_LOG(guest_st_preidx,
+						  guest_check_no_write_in_dirty_log),
+
+	/*
+	 * Access when both the test and PT memslots are read-only, and punched
+	 * with holes tracked with userfaultfd. The expected result is the
+	 * union of both userfaultfd and read-only behaviors. For example,
+	 * write accesses result in a userfaultfd write fault and an MMIO exit.
+	 * Writes with no syndrome result in a failed vcpu run and no
+	 * userfaultfd write fault. Reads only result in userfaultfd getting
+	 * triggered.
+	 */
+	TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, 0,
+				 dabt_s1ptw_on_ro_memslot_handler, 1,
+				 uffd_test_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, 0,
+				 dabt_s1ptw_on_ro_memslot_handler, 1,
+				 uffd_test_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, 0,
+				 dabt_s1ptw_on_ro_memslot_handler, 1,
+				 uffd_no_handler, 1),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0,
+				 iabt_s1ptw_on_ro_memslot_handler, 0, 1,
+				 uffd_test_read_handler, 2),
+	TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1, 0,
+				 dabt_s1ptw_on_ro_memslot_handler, 1,
+				 uffd_test_write_handler, 2),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas,
+					     uffd_test_read_handler, 2),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva,
+					     uffd_no_handler, 1),
+	TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx,
+					     uffd_no_handler, 1),
+
 	{ 0 }
 };
-- 
2.37.0.rc0.161.g10f37bed90-goog
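Editor's note (not part of the patch): the test matrix above combines
KVM_MEM_READONLY and KVM_MEM_LOG_DIRTY_PAGES on the same memslot. As
background, the stand-alone sketch below shows the bare KVM API those flags
map to: creating a slot with both flags via KVM_SET_USER_MEMORY_REGION and
reading its dirty bitmap back with KVM_GET_DIRTY_LOG. The slot number, size,
and guest-physical address are made up for illustration, 4KiB guest pages are
assumed, and no vCPU is run (so the bitmap stays clear); the selftest itself
drives the equivalent steps through the kvm_util library instead.

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define SLOT		1
#define SLOT_SIZE	(16 * 4096UL)	/* 16 guest pages, assuming 4KiB pages */
#define SLOT_GPA	0x10000000UL	/* arbitrary guest-physical base */

int main(void)
{
	unsigned long bitmap[1] = { 0 };	/* 16 pages fit in one word */
	void *backing;
	int kvm, vm;

	kvm = open("/dev/kvm", O_RDWR);
	if (kvm < 0) {
		perror("/dev/kvm");
		return 1;
	}
	vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (vm < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}

	backing = mmap(NULL, SLOT_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* A memslot that is read-only *and* has dirty logging enabled. */
	struct kvm_userspace_memory_region region = {
		.slot = SLOT,
		.flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = SLOT_GPA,
		.memory_size = SLOT_SIZE,
		.userspace_addr = (uint64_t)backing,
	};
	if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region)) {
		perror("KVM_SET_USER_MEMORY_REGION");
		return 1;
	}

	/*
	 * After running a vCPU this would report the pages the guest dirtied;
	 * the tests above expect it to stay empty for read-only slots.
	 */
	struct kvm_dirty_log log = {
		.slot = SLOT,
		.dirty_bitmap = bitmap,
	};
	if (ioctl(vm, KVM_GET_DIRTY_LOG, &log))
		perror("KVM_GET_DIRTY_LOG");

	printf("dirty bitmap word 0: 0x%lx\n", bitmap[0]);
	close(vm);
	close(kvm);
	return 0;
}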