From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com,
    Jiong Wang
Subject: [PATCH v4 bpf-next 05/15] bpf: introduce new bpf prog load flags "BPF_F_TEST_RND_HI32"
Date: Mon, 15 Apr 2019 18:26:15 +0100
Message-Id: <1555349185-12508-6-git-send-email-jiong.wang@netronome.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>
References: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>
List-ID: <bpf.vger.kernel.org>

x86_64 and AArch64 are perhaps the two arches on which the bpf testsuite
is run most frequently. However, the zero extension insertion pass is not
enabled for them, because their hardware zero-extends sub-register writes
implicitly.
It is critical to guarantee the correctness of this pass, as it is
supposed to be enabled by default for a number of other arches, for
example PowerPC, SPARC, arm and NFP. Therefore, it would be very useful
if there were a way to test this pass on, for example, x86_64.

The test methodology employed by this set is "poisoning" useless bits:
the high 32 bits of a definition are randomized whenever the definition
is identified as not being used by any later instruction. Such
randomization is only enabled under a testing mode, which is gated by
the new bpf prog load flag "BPF_F_TEST_RND_HI32".

Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
 include/uapi/linux/bpf.h       | 18 ++++++++++++++++++
 kernel/bpf/syscall.c           |  4 +++-
 tools/include/uapi/linux/bpf.h | 18 ++++++++++++++++++
 3 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c26be24..89c85a3 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -258,6 +258,24 @@ enum bpf_attach_type {
  */
 #define BPF_F_ANY_ALIGNMENT	(1U << 1)
 
+/* BPF_F_TEST_RND_HI32 is used in the BPF_PROG_LOAD command for testing only.
+ * The verifier does sub-register def/use analysis and identifies instructions
+ * whose def only matters for the low 32 bits and whose high 32 bits are never
+ * referenced later through implicit zero extension. It therefore notifies JIT
+ * back-ends that it is safe to skip clearing the high 32 bits for these
+ * instructions, which saves some back-ends a lot of code-gen. However, such an
+ * optimization is unnecessary on some arches, for example x86_64 and arm64,
+ * whose JIT back-ends hence do not use the verifier's analysis result. But we
+ * really want a way to verify the correctness of the described optimization on
+ * x86_64, on which testsuites are frequently exercised.
+ *
+ * Hence this flag. Once it is set, the verifier will randomize the high 32
+ * bits of those definitions identified as safe to leave unextended. Then, if
+ * the verifier's analysis is wrong, such randomization will regress tests and
+ * expose the bug.
+ */
+#define BPF_F_TEST_RND_HI32	(1U << 2)
+
 /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
  * two extensions:
  *
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 92c9b8a..abe2804 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1600,7 +1600,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 	if (CHECK_ATTR(BPF_PROG_LOAD))
 		return -EINVAL;
 
-	if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT | BPF_F_ANY_ALIGNMENT))
+	if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT |
+				 BPF_F_ANY_ALIGNMENT |
+				 BPF_F_TEST_RND_HI32))
 		return -EINVAL;
 
 	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index c26be24..89c85a3 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -258,6 +258,24 @@ enum bpf_attach_type {
  */
 #define BPF_F_ANY_ALIGNMENT	(1U << 1)
 
+/* BPF_F_TEST_RND_HI32 is used in the BPF_PROG_LOAD command for testing only.
+ * The verifier does sub-register def/use analysis and identifies instructions
+ * whose def only matters for the low 32 bits and whose high 32 bits are never
+ * referenced later through implicit zero extension. It therefore notifies JIT
+ * back-ends that it is safe to skip clearing the high 32 bits for these
+ * instructions, which saves some back-ends a lot of code-gen. However, such an
+ * optimization is unnecessary on some arches, for example x86_64 and arm64,
+ * whose JIT back-ends hence do not use the verifier's analysis result. But we
+ * really want a way to verify the correctness of the described optimization on
+ * x86_64, on which testsuites are frequently exercised.
+ *
+ * Hence this flag. Once it is set, the verifier will randomize the high 32
+ * bits of those definitions identified as safe to leave unextended. Then, if
+ * the verifier's analysis is wrong, such randomization will regress tests and
+ * expose the bug.
+ */
+#define BPF_F_TEST_RND_HI32	(1U << 2)
+
 /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
  * two extensions:
  *
-- 
2.7.4